Unraid becomes laggy as hell after a while and then dies



I have no idea what is happening. This is now the second or third time it has happened within a few days. Nothing special was done. At first I thought it was because people watching Plex on my server were killing my bandwidth and more (even though that wouldn't explain the problem that I have).

 

It appears randomly and starts with some Docker containers becoming unhealthy and restarting. Then the loop begins. If I try to open the unRAID web UI, it takes a minute to load if I'm lucky. I was able to take a screenshot, but I guess the high load occurs because all containers become unhealthy, the autoheal tool restarts them all, and they never become healthy again.

 

The only other suspicious thing I noticed is that my Nextcloud Docker instance has also been failing several times a day for the past few days. The 200 status code turns into 503, Nextcloud automatically enables maintenance mode, and I always have to turn it off again, after which everything is fine.

 

I was lucky to grab the diagnostics before it became completely unusable and unresponsive. The only thing that helps is a restart. A parity check finished today without errors. I have changed nothing in my hardware setup, and unRAID 6.12.6 has been installed for a while now. Temperatures are also okay, at around 40 °C. I'm running nearly 100 Docker containers, but that's all; no VM was running or anything else.

 

I had a look at the diagnostics docker.log, and since it covers the last 3 days you can also see there what I described:

time="2024-01-31T01:02:31.586877851+01:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2024-01-31T01:02:31.586918839+01:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2024-01-31T01:02:31.586926061+01:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1

 

This repeats very often, and I don't know what is causing it. Is it some kind of memory leak? But as the screenshot shows, there is still enough RAM... please help me!

Screenshot 2024-02-01 015447.jpg

Screenshot 2024-02-01 015453.jpg

unraid-server-diagnostics-20240201-0155.zip

It's not even possible to restart. After clicking restart and waiting 10 minutes, I eventually get back to the Dashboard only to see this:

[screenshot of the Dashboard attached]

 

Edit: Another thing I just noticed is that Nextcloud, for example, runs much faster after that reboot but gets slower over time. It may not have anything to do with Unraid, but I noticed it.


@trurl

 

Filesystem                                                                     Size  Used Avail Use% Mounted on
rootfs                                                                          32G  532M   31G   2% /
tmpfs                                                                           32M  5.9M   27M  19% /run
/dev/sda                                                                       7.5G  1.1G  6.5G  14% /boot
overlay                                                                         32G  532M   31G   2% /lib
overlay                                                                         32G  532M   31G   2% /usr
devtmpfs                                                                       8.0M     0  8.0M   0% /dev
tmpfs                                                                           32G     0   32G   0% /dev/shm
tmpfs                                                                          4.0G  856K  4.0G   1% /var/log
tmpfs                                                                          1.0M     0  1.0M   0% /mnt/disks
tmpfs                                                                          1.0M     0  1.0M   0% /mnt/remotes
tmpfs                                                                          1.0M     0  1.0M   0% /mnt/addons
tmpfs                                                                          1.0M     0  1.0M   0% /mnt/rootshare
/dev/md1p1                                                                      17T   14T  2.6T  85% /mnt/disk1
/dev/md2p1                                                                     5.5T  2.9T  2.6T  53% /mnt/disk2
cache                                                                          710G  384K  710G   1% /mnt/cache
safety                                                                         2.5T   31G  2.4T   2% /mnt/safety
cache/domains                                                                  720G   11G  710G   2% /mnt/cache/domains
cache/domains/Ubuntu-File-Transfer                                             710G  128K  710G   1% /mnt/cache/domains/Ubuntu-File-Transfer
cache/appdata                                                                  711G  1.6G  710G   1% /mnt/cache/appdata
cache/system                                                                   710G  319M  710G   1% /mnt/cache/system
cache/development                                                              710G  7.3M  710G   1% /mnt/cache/development
cache/appdata/sablier                                                          710G  128K  710G   1% /mnt/cache/appdata/sablier
cache/appdata/remotely                                                         710G  512K  710G   1% /mnt/cache/appdata/remotely
cache/appdata/redis                                                            710G  3.0M  710G   1% /mnt/cache/appdata/redis
cache/appdata/nginx-proxy-manager                                              710G   16M  710G   1% /mnt/cache/appdata/nginx-proxy-manager
cache/appdata/adguardhome                                                      711G  753M  710G   1% /mnt/cache/appdata/adguardhome
cache/appdata/semaphore                                                        710G  256K  710G   1% /mnt/cache/appdata/semaphore
cache/appdata/duplicati                                                        710G   86M  710G   1% /mnt/cache/appdata/duplicati
cache/appdata/homepage                                                         710G  1.4M  710G   1% /mnt/cache/appdata/homepage
cache/appdata/changedetection.io                                               710G  1.5M  710G   1% /mnt/cache/appdata/changedetection.io
cache/appdata/onedev                                                           710G  375M  710G   1% /mnt/cache/appdata/onedev
cache/appdata/smart-home                                                       714G  4.2G  710G   1% /mnt/cache/appdata/smart-home
cache/appdata/n8n                                                              710G   68M  710G   1% /mnt/cache/appdata/n8n
cache/appdata/portainer                                                        710G   18M  710G   1% /mnt/cache/appdata/portainer
cache/appdata/ldap                                                             710G   50M  710G   1% /mnt/cache/appdata/ldap
cache/appdata/unmanic                                                          710G  128K  710G   1% /mnt/cache/appdata/unmanic
cache/appdata/hrconvert2                                                       710G  128K  710G   1% /mnt/cache/appdata/hrconvert2
cache/appdata/mediaserver                                                      766G   57G  710G   8% /mnt/cache/appdata/mediaserver
cache/appdata/monitoring                                                       716G  6.0G  710G   1% /mnt/cache/appdata/monitoring
cache/appdata/website                                                          711G  962M  710G   1% /mnt/cache/appdata/website
cache/appdata/wiki                                                             710G  258M  710G   1% /mnt/cache/appdata/wiki
cache/appdata/endlessh                                                         710G  128K  710G   1% /mnt/cache/appdata/endlessh
cache/appdata/finance                                                          710G   27M  710G   1% /mnt/cache/appdata/finance
cache/appdata/snippet-box                                                      710G  128K  710G   1% /mnt/cache/appdata/snippet-box
cache/appdata/code-server                                                      710G  6.3M  710G   1% /mnt/cache/appdata/code-server
cache/appdata/ddclient                                                         710G  128K  710G   1% /mnt/cache/appdata/ddclient
cache/appdata/authelia                                                         710G  6.0M  710G   1% /mnt/cache/appdata/authelia
cache/appdata/downloader                                                       710G  308M  710G   1% /mnt/cache/appdata/downloader
cache/appdata/speedtest-tracker-db                                             710G   46M  710G   1% /mnt/cache/appdata/speedtest-tracker-db
cache/appdata/cloud                                                            712G  2.4G  710G   1% /mnt/cache/appdata/cloud
cache/appdata/deprecated                                                       710G  238M  710G   1% /mnt/cache/appdata/deprecated
cache/appdata/speedtest-tracker                                                710G  128K  710G   1% /mnt/cache/appdata/speedtest-tracker
cache/appdata/healthchecks                                                     710G  896K  710G   1% /mnt/cache/appdata/healthchecks
cache/appdata/dms                                                              710G  152M  710G   1% /mnt/cache/appdata/dms
cache/appdata/urbackup                                                         710G  212M  710G   1% /mnt/cache/appdata/urbackup
cache/appdata/traggo                                                           710G  128K  710G   1% /mnt/cache/appdata/traggo
cache/appdata/deprecated/bookstack                                             710G  128K  710G   1% /mnt/cache/appdata/deprecated/bookstack
cache/appdata/deprecated/bookstack-db                                          710G  128K  710G   1% /mnt/cache/appdata/deprecated/bookstack-db
safety/cloud                                                                   2.5T  125G  2.4T   5% /mnt/safety/cloud
safety/Sascha                                                                  3.2T  768G  2.4T  25% /mnt/safety/Sascha
safety/backup                                                                  2.4T  2.3G  2.4T   1% /mnt/safety/backup
safety/Musik                                                                   2.5T   60G  2.4T   3% /mnt/safety/Musik
shfs                                                                            22T   17T  5.2T  77% /mnt/user0
shfs                                                                            22T   17T  5.2T  77% /mnt/user
/dev/loop2                                                                     1.0G  4.6M  904M   1% /etc/libvirt
cache/system/e8a1eefe454230f065bf35c4359f9eff44a4ebe8325e75a4db9a50d813468cf6  712G  2.1G  710G   1% /var/lib/docker/zfs/graph/e8a1eefe454230f065bf35c4359f9eff44a4ebe8325e75a4db9a50d813468cf6
cache/system/1be623491e2f0e171707e79c7d26a89e094ddcb27a6d669ac02d0f6446f2ff3d  711G  1.1G  710G   1% /var/lib/docker/zfs/graph/1be623491e2f0e171707e79c7d26a89e094ddcb27a6d669ac02d0f6446f2ff3d
cache/system/a4d7cfe504453602e7264c76c2c83227e137c005df472e4404d57089d3cc3278  712G  1.8G  710G   1% /var/lib/docker/zfs/graph/a4d7cfe504453602e7264c76c2c83227e137c005df472e4404d57089d3cc3278
cache/system/59fbd4fdc2fce9311d3020c592abe3d97139ab4415ec28a586e6b885f98c0671  710G  498M  710G   1% /var/lib/docker/zfs/graph/59fbd4fdc2fce9311d3020c592abe3d97139ab4415ec28a586e6b885f98c0671
cache/system/92a90aeec7a97f4640229eea72a376db732b9678879e83ac1441fbd535f76bd1  710G   16M  710G   1% /var/lib/docker/zfs/graph/92a90aeec7a97f4640229eea72a376db732b9678879e83ac1441fbd535f76bd1
cache/system/5769a7086692bed5aa2248a9eedafab558d8f0bcc175ee8fbaff8ac66938b8b8  710G  239M  710G   1% /var/lib/docker/zfs/graph/5769a7086692bed5aa2248a9eedafab558d8f0bcc175ee8fbaff8ac66938b8b8
cache/system/e5dbccff166d24bff3024b4385317c8da5d3e38788625c3bbf704765483f073e  710G  112M  710G   1% /var/lib/docker/zfs/graph/e5dbccff166d24bff3024b4385317c8da5d3e38788625c3bbf704765483f073e
cache/system/9e33d8b61635798d0856e0686ff349d86102c9366991a65e1460cc65b3d7afc7  710G   41M  710G   1% /var/lib/docker/zfs/graph/9e33d8b61635798d0856e0686ff349d86102c9366991a65e1460cc65b3d7afc7
cache/system/d50c80bd567b183b5024742c02388e9110e3841e9782600b94e2f360563b4e4c  711G  979M  710G   1% /var/lib/docker/zfs/graph/d50c80bd567b183b5024742c02388e9110e3841e9782600b94e2f360563b4e4c
cache/system/ae6820b44cab95f60829264253b76f7832faea57780022b7595bd95a0b73acb7  710G  427M  710G   1% /var/lib/docker/zfs/graph/ae6820b44cab95f60829264253b76f7832faea57780022b7595bd95a0b73acb7
cache/system/e3834f9dbeb365ee65c1c71111dd0b4920400896c9b8ef145460c04dc9a36d39  710G  242M  710G   1% /var/lib/docker/zfs/graph/e3834f9dbeb365ee65c1c71111dd0b4920400896c9b8ef145460c04dc9a36d39
cache/system/ab373807429b49d2e9e6f17b493b2066f59acb5eae42874aa8a1c403bee7e4ca  710G  427M  710G   1% /var/lib/docker/zfs/graph/ab373807429b49d2e9e6f17b493b2066f59acb5eae42874aa8a1c403bee7e4ca
cache/system/2b2f711c33bb7e97a3895c8bd1474c54d1a7c0e007e7d532f34f9258548a90a9  710G  156M  710G   1% /var/lib/docker/zfs/graph/2b2f711c33bb7e97a3895c8bd1474c54d1a7c0e007e7d532f34f9258548a90a9
cache/system/88373cdbfa1707d0e58e99b0325c4aedff3e1145092cb4f03779f8bbf137fa32  710G   12M  710G   1% /var/lib/docker/zfs/graph/88373cdbfa1707d0e58e99b0325c4aedff3e1145092cb4f03779f8bbf137fa32
cache/system/bf07c2720311d0ab417da4852319fd034d382d1630344ab049ab5ce454c061f7  710G  620M  710G   1% /var/lib/docker/zfs/graph/bf07c2720311d0ab417da4852319fd034d382d1630344ab049ab5ce454c061f7
cache/system/ee9365e2e6e922e3c41b5bd7218df5dc931c3153bf5ae2809567a6c9d44616c5  710G  192M  710G   1% /var/lib/docker/zfs/graph/ee9365e2e6e922e3c41b5bd7218df5dc931c3153bf5ae2809567a6c9d44616c5
cache/system/d58c024bc4048c6119c96bbd5baf5abf7a153fba84101db95e0f1647d0f2ed91  710G  198M  710G   1% /var/lib/docker/zfs/graph/d58c024bc4048c6119c96bbd5baf5abf7a153fba84101db95e0f1647d0f2ed91
cache/system/ca1d75b92dee570be7443918c3236847fdaf09979989734dc2dffe143d15451f  710G  118M  710G   1% /var/lib/docker/zfs/graph/ca1d75b92dee570be7443918c3236847fdaf09979989734dc2dffe143d15451f
cache/system/76699a3fd3defb8c93f22c6cb740450c6f7759c98f12605cc64edb05a67db706  710G  262M  710G   1% /var/lib/docker/zfs/graph/76699a3fd3defb8c93f22c6cb740450c6f7759c98f12605cc64edb05a67db706
cache/system/a0c5888e582921b424027faed60a900ff60c4d1a4769925ef1b7a68e28328a85  710G  236M  710G   1% /var/lib/docker/zfs/graph/a0c5888e582921b424027faed60a900ff60c4d1a4769925ef1b7a68e28328a85
cache/system/68d1a52cde55b6e15e13cb06b146273b0b2c52376bbcd800d59066cb72577ab6  711G  843M  710G   1% /var/lib/docker/zfs/graph/68d1a52cde55b6e15e13cb06b146273b0b2c52376bbcd800d59066cb72577ab6
cache/system/0d5b366eba8642ecb836cd33ba3aa01b2e87ebc324f74cd3523336239a93df0d  710G  117M  710G   1% /var/lib/docker/zfs/graph/0d5b366eba8642ecb836cd33ba3aa01b2e87ebc324f74cd3523336239a93df0d
cache/system/2fe6ae13c96d20a6a9de6c8a492eecd9b7fa733d19e6fb56a4203709e2d00918  710G   49M  710G   1% /var/lib/docker/zfs/graph/2fe6ae13c96d20a6a9de6c8a492eecd9b7fa733d19e6fb56a4203709e2d00918
cache/system/0cc2d7dbd265cfaf78628bf7c02c58355949db483e6e06a49902110e8cc443ce  710G  515M  710G   1% /var/lib/docker/zfs/graph/0cc2d7dbd265cfaf78628bf7c02c58355949db483e6e06a49902110e8cc443ce
cache/system/0f150e957c5055f35e48c6daa388a950902a1345ca775908625e8f550c73472d  710G   25M  710G   1% /var/lib/docker/zfs/graph/0f150e957c5055f35e48c6daa388a950902a1345ca775908625e8f550c73472d
cache/system/cb9e020fc90aabc8649d72645709b7aa5685d648465f5bfd0d3d65aeee54715d  710G  409M  710G   1% /var/lib/docker/zfs/graph/cb9e020fc90aabc8649d72645709b7aa5685d648465f5bfd0d3d65aeee54715d
cache/system/f5437360d7b9733f18f7ee64b6ce70f7b31de389d0cd898a287a010274dc354f  710G   89M  710G   1% /var/lib/docker/zfs/graph/f5437360d7b9733f18f7ee64b6ce70f7b31de389d0cd898a287a010274dc354f
cache/system/426900ef5cf827ad2f3a3324a57dc3383cf61df78d73d91a08f3035c924daccf  710G   22M  710G   1% /var/lib/docker/zfs/graph/426900ef5cf827ad2f3a3324a57dc3383cf61df78d73d91a08f3035c924daccf
cache/system/1232c42eb15a2edec0f1c755688ce944be61df5729a29b86a4eb426c078356d4  710G  138M  710G   1% /var/lib/docker/zfs/graph/1232c42eb15a2edec0f1c755688ce944be61df5729a29b86a4eb426c078356d4
cache/system/f2a3fc6e18dd2b1c92e9b541dee8acf044128d244bb1ef7cde789808a9982760  710G  206M  710G   1% /var/lib/docker/zfs/graph/f2a3fc6e18dd2b1c92e9b541dee8acf044128d244bb1ef7cde789808a9982760
cache/system/318aafd0d654fe8aeac089c8a8b48ada56cb228508dbb0d673faecbef4d283d9  710G  147M  710G   1% /var/lib/docker/zfs/graph/318aafd0d654fe8aeac089c8a8b48ada56cb228508dbb0d673faecbef4d283d9
cache/system/31e24ae7cc7fbeb4d2e155ef053ff39826763ee73dc485a988af1eca2a9a4f9c  710G  113M  710G   1% /var/lib/docker/zfs/graph/31e24ae7cc7fbeb4d2e155ef053ff39826763ee73dc485a988af1eca2a9a4f9c
cache/system/cc8d88bfcf31ede6da1daa58ce57cc38d8033ddbc42cc7a881c864130ab24ddf  710G   84M  710G   1% /var/lib/docker/zfs/graph/cc8d88bfcf31ede6da1daa58ce57cc38d8033ddbc42cc7a881c864130ab24ddf
cache/system/5f23404bdb8171b1bac01ba0251561c819cdbdc1cc6e1c157a8e0d813ae8ee54  710G  293M  710G   1% /var/lib/docker/zfs/graph/5f23404bdb8171b1bac01ba0251561c819cdbdc1cc6e1c157a8e0d813ae8ee54
cache/system/f644c904d6ffb22b799256109b17d38f6d8956bf2a4dba45c649a8b4e816cd10  710G   75M  710G   1% /var/lib/docker/zfs/graph/f644c904d6ffb22b799256109b17d38f6d8956bf2a4dba45c649a8b4e816cd10
cache/system/ab509feb7fdc8db986f4fc77503854a2a193ba076332f34fab699423185efc99  710G   32M  710G   1% /var/lib/docker/zfs/graph/ab509feb7fdc8db986f4fc77503854a2a193ba076332f34fab699423185efc99
cache/system/1f32e5abee5938a1ca836f933322f63093d15dfc431ba9650b92bb987f607243  710G  179M  710G   1% /var/lib/docker/zfs/graph/1f32e5abee5938a1ca836f933322f63093d15dfc431ba9650b92bb987f607243
cache/system/b1c5c1461a5c836b3bd1ff239957c573a110639b12aa177b0b147e4e7b60ee3d  711G 1013M  710G   1% /var/lib/docker/zfs/graph/b1c5c1461a5c836b3bd1ff239957c573a110639b12aa177b0b147e4e7b60ee3d
cache/system/0b16d0d52de3f72e3123ff5b82b5f64993956985a5d021b6e248b85abc6f90eb  711G  971M  710G   1% /var/lib/docker/zfs/graph/0b16d0d52de3f72e3123ff5b82b5f64993956985a5d021b6e248b85abc6f90eb
cache/system/c7289e222a0b9b990b35ee9744a03a6b1af16daf160ef19592eee874bd75d99f  710G  9.8M  710G   1% /var/lib/docker/zfs/graph/c7289e222a0b9b990b35ee9744a03a6b1af16daf160ef19592eee874bd75d99f
cache/system/056cf475f09055b5d35b58b48e5d9b51d91cd6c0d0859868b160d8c9d6787276  711G 1009M  710G   1% /var/lib/docker/zfs/graph/056cf475f09055b5d35b58b48e5d9b51d91cd6c0d0859868b160d8c9d6787276
cache/system/8ebc1fdd867c73f796e2d539963e54f0d4005300d50640403694da0ffc08dce8  710G  535M  710G   1% /var/lib/docker/zfs/graph/8ebc1fdd867c73f796e2d539963e54f0d4005300d50640403694da0ffc08dce8
cache/system/4302abed5ea57ea1c2bce0f3e86ad84fc3630215903d9223eeffc21f0e075f49  710G  198M  710G   1% /var/lib/docker/zfs/graph/4302abed5ea57ea1c2bce0f3e86ad84fc3630215903d9223eeffc21f0e075f49
cache/system/704d69dd6da682ef0e7a93422abb8215b6db30993c81b7adde6dcc8f8bbcb877  710G  198M  710G   1% /var/lib/docker/zfs/graph/704d69dd6da682ef0e7a93422abb8215b6db30993c81b7adde6dcc8f8bbcb877
cache/system/84919403db27c2e0e7f4e7c86efe91c083649ed65b29aa0eb77c21ae830c6833  710G  459M  710G   1% /var/lib/docker/zfs/graph/84919403db27c2e0e7f4e7c86efe91c083649ed65b29aa0eb77c21ae830c6833
cache/system/82a03784cd53ba8689f21501c292fd50ad316ba3521541f09c1d2af11fea1ad0  711G  1.3G  710G   1% /var/lib/docker/zfs/graph/82a03784cd53ba8689f21501c292fd50ad316ba3521541f09c1d2af11fea1ad0
cache/system/ed4806db10584dd0515222b648816fb6e98ad3b7be446054e261153920074ffe  710G  352M  710G   1% /var/lib/docker/zfs/graph/ed4806db10584dd0515222b648816fb6e98ad3b7be446054e261153920074ffe
cache/system/9492c03d3a793b567d76e4b2b17df6e2ca40c3b1659fa048fbf319312c36e4d6  710G  198M  710G   1% /var/lib/docker/zfs/graph/9492c03d3a793b567d76e4b2b17df6e2ca40c3b1659fa048fbf319312c36e4d6
cache/system/3af3c1cd4545da5686dfaa6fa1c0d21c1e4fe344258bfc76f93a56d8d66ddb32  710G   12M  710G   1% /var/lib/docker/zfs/graph/3af3c1cd4545da5686dfaa6fa1c0d21c1e4fe344258bfc76f93a56d8d66ddb32
cache/system/6c4ad09a4c2d03f2699943970aef0f3f99d7435890dbb164894eb09444825ad2  710G   30M  710G   1% /var/lib/docker/zfs/graph/6c4ad09a4c2d03f2699943970aef0f3f99d7435890dbb164894eb09444825ad2
cache/system/2088778e022a8b024c0fabcd4a40ad1797c02ffe6ffca1ed7b87828868e4aa4e  710G   36M  710G   1% /var/lib/docker/zfs/graph/2088778e022a8b024c0fabcd4a40ad1797c02ffe6ffca1ed7b87828868e4aa4e
cache/system/3620a65dddc28d26cf1a101de70d53738aee10729d3eca3571135d93d993c324  710G  242M  710G   1% /var/lib/docker/zfs/graph/3620a65dddc28d26cf1a101de70d53738aee10729d3eca3571135d93d993c324
cache/system/9844935ec93f882121700e0389609e6fad7ccf2e8a1d0113763a19d7654fe829  711G  811M  710G   1% /var/lib/docker/zfs/graph/9844935ec93f882121700e0389609e6fad7ccf2e8a1d0113763a19d7654fe829
cache/system/fff7fe343cd0f6ff0c454599a5fc86612a48ec8a5bfd75830e83fcf52875ef04  710G  356M  710G   1% /var/lib/docker/zfs/graph/fff7fe343cd0f6ff0c454599a5fc86612a48ec8a5bfd75830e83fcf52875ef04
cache/system/8d7de36e597549ca1c32700383e3727c99fce5b7a0d1d6b00c91096d1896a8cc  711G  995M  710G   1% /var/lib/docker/zfs/graph/8d7de36e597549ca1c32700383e3727c99fce5b7a0d1d6b00c91096d1896a8cc
cache/system/d66e8ad26ef673cb3279019d543adfa569197ace7d24737ed946b843ce0e84d2  710G  222M  710G   1% /var/lib/docker/zfs/graph/d66e8ad26ef673cb3279019d543adfa569197ace7d24737ed946b843ce0e84d2
cache/system/a160a867bb39c7d00ce3bbd1f9db3b79df0cc9250ec0db99238c0a7eaca78e59  710G  242M  710G   1% /var/lib/docker/zfs/graph/a160a867bb39c7d00ce3bbd1f9db3b79df0cc9250ec0db99238c0a7eaca78e59
cache/system/ba916a663d93c88c00a204e58230b9b9066605853c5a4341edbe074b687b8704  711G  1.1G  710G   1% /var/lib/docker/zfs/graph/ba916a663d93c88c00a204e58230b9b9066605853c5a4341edbe074b687b8704
cache/system/10e0752573c69fbaf92a94986ca6f6fdbddfddd0f48fcad33e52b11d848e98c9  710G  224M  710G   1% /var/lib/docker/zfs/graph/10e0752573c69fbaf92a94986ca6f6fdbddfddd0f48fcad33e52b11d848e98c9
cache/system/934f49aba3ca787ebcd58d96449f0bc243340350dc808be1742361faa69b9ba1  710G  150M  710G   1% /var/lib/docker/zfs/graph/934f49aba3ca787ebcd58d96449f0bc243340350dc808be1742361faa69b9ba1
cache/system/70694c661997b8425cfe0262f9f7c313e37354f443c0727a51ef7de57258cf60  710G  308M  710G   1% /var/lib/docker/zfs/graph/70694c661997b8425cfe0262f9f7c313e37354f443c0727a51ef7de57258cf60
cache/system/9da8a9ffdb60ac4a58c3b87f51658a85d4bc04d80ccde59ed10041d731f27338  710G  255M  710G   1% /var/lib/docker/zfs/graph/9da8a9ffdb60ac4a58c3b87f51658a85d4bc04d80ccde59ed10041d731f27338
cache/system/3d14e51e421519360ad7573dbf93963ac3ec22276901bf17062dd9f1a665dd6c  710G  262M  710G   1% /var/lib/docker/zfs/graph/3d14e51e421519360ad7573dbf93963ac3ec22276901bf17062dd9f1a665dd6c
cache/system/1fc939cb4a60c443d56b03949f1ad9f53e4c6e1a60d2be30b03e5bf029f991be  711G  1.2G  710G   1% /var/lib/docker/zfs/graph/1fc939cb4a60c443d56b03949f1ad9f53e4c6e1a60d2be30b03e5bf029f991be
cache/system/39198428065fe4898f589b12475be5d64906dbb9b5c470efeb98aac9dd1f14f6  710G  177M  710G   1% /var/lib/docker/zfs/graph/39198428065fe4898f589b12475be5d64906dbb9b5c470efeb98aac9dd1f14f6
cache/system/315fd6f273a61bb43ea3c7da1c02cecf9ed625e28a8d2c86b93b7df3e7a8c3ca  710G  231M  710G   1% /var/lib/docker/zfs/graph/315fd6f273a61bb43ea3c7da1c02cecf9ed625e28a8d2c86b93b7df3e7a8c3ca
cache/system/fc020e963725fb4b5ff1d80eb9964d60b5d48205f6f47ba61af6f4ed34478dc8  710G  109M  710G   1% /var/lib/docker/zfs/graph/fc020e963725fb4b5ff1d80eb9964d60b5d48205f6f47ba61af6f4ed34478dc8
cache/system/76568fd71ecabb1be3c2bd856fa5aabd05567a74952a46ece4ee24cf6bf10ad1  710G  517M  710G   1% /var/lib/docker/zfs/graph/76568fd71ecabb1be3c2bd856fa5aabd05567a74952a46ece4ee24cf6bf10ad1
cache/system/40c09ea2f3f0211739419f86d4950e41b63e387db7d32effa3a974c2f74bfb8d  710G  510M  710G   1% /var/lib/docker/zfs/graph/40c09ea2f3f0211739419f86d4950e41b63e387db7d32effa3a974c2f74bfb8d
cache/system/e823eb420d51743083c278b0c50548b4c0d6a28caa51ed4c285203f758380ec7  710G  251M  710G   1% /var/lib/docker/zfs/graph/e823eb420d51743083c278b0c50548b4c0d6a28caa51ed4c285203f758380ec7
cache/system/89ab6f4fb0545f02cd1ee305d706c8f885f9fd46f3b5f9e8ea35ec9ddf87f18a  710G  490M  710G   1% /var/lib/docker/zfs/graph/89ab6f4fb0545f02cd1ee305d706c8f885f9fd46f3b5f9e8ea35ec9ddf87f18a
cache/system/2dcd863d9ba6f2e7691a2dc27de65eaf9654b6f0dc8c3e911d362976ed6cba69  711G  766M  710G   1% /var/lib/docker/zfs/graph/2dcd863d9ba6f2e7691a2dc27de65eaf9654b6f0dc8c3e911d362976ed6cba69
cache/system/344fa30fbc91e4a9789cab6736bb5b88df740e8201b2052b7bfbf6da984a3606  710G  639M  710G   1% /var/lib/docker/zfs/graph/344fa30fbc91e4a9789cab6736bb5b88df740e8201b2052b7bfbf6da984a3606
cache/system/7c7de960343a6bb558ca27765fd3027706817495cbd463b58a42ac8263377e2c  710G  277M  710G   1% /var/lib/docker/zfs/graph/7c7de960343a6bb558ca27765fd3027706817495cbd463b58a42ac8263377e2c
cache/system/27efacc1298d8cbd821abf2d892aea35300fadad6634de7387dafa482c24fca0  710G  9.2M  710G   1% /var/lib/docker/zfs/graph/27efacc1298d8cbd821abf2d892aea35300fadad6634de7387dafa482c24fca0
cache/system/faaf3b59904aabd5fa6031d1939f35a5187cca844aa78988bd09d222ec8fbdbc  710G   13M  710G   1% /var/lib/docker/zfs/graph/faaf3b59904aabd5fa6031d1939f35a5187cca844aa78988bd09d222ec8fbdbc
cache/system/fbf993becba2893c2b4b5ee066e5aeb1d003f3c0bef5174eee850a38fb2e9e48  710G  198M  710G   1% /var/lib/docker/zfs/graph/fbf993becba2893c2b4b5ee066e5aeb1d003f3c0bef5174eee850a38fb2e9e48
cache/system/4f2bab8ebc9b56f04b6067d25def02128bae2fc7ca07ac39323afb6a864c8be6  710G   11M  710G   1% /var/lib/docker/zfs/graph/4f2bab8ebc9b56f04b6067d25def02128bae2fc7ca07ac39323afb6a864c8be6
cache/system/603733c80a97d8fe778feb8f9a7662e3ba0cdf71a4262c0cc120436b65bd798d  710G  224M  710G   1% /var/lib/docker/zfs/graph/603733c80a97d8fe778feb8f9a7662e3ba0cdf71a4262c0cc120436b65bd798d
cache/system/197538007e358844d75174831ffe0b13e1a62d2de775e7a189ca6c893d1cc3b1  710G  223M  710G   1% /var/lib/docker/zfs/graph/197538007e358844d75174831ffe0b13e1a62d2de775e7a189ca6c893d1cc3b1
cache/system/b24f39da76b5da16b6e8d4010aea115049c560d26ce5eeb85c0171f2cdd9bd56  710G  405M  710G   1% /var/lib/docker/zfs/graph/b24f39da76b5da16b6e8d4010aea115049c560d26ce5eeb85c0171f2cdd9bd56
cache/system/a9dbbbe8afcedfb2591b50eb0eddab322194b0cb3623ab64183526e14d6e6ca4  710G   62M  710G   1% /var/lib/docker/zfs/graph/a9dbbbe8afcedfb2591b50eb0eddab322194b0cb3623ab64183526e14d6e6ca4
cache/isos                                                                     710G  128K  710G   1% /mnt/cache/isos

 


Set up a syslog server.

 

And get diagnostics before it begins to get too bad.

 

What I was looking for from that command was just the first result, so you can try it yourself periodically to see how it is going:

df -h /

That will just give the first result, which is rootfs. That is where the OS files are (in RAM). If you fill it up things can go bad quickly since the OS has no space to work in.
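
For reference, here is a minimal sketch of how that periodic check could be automated with the User Scripts plugin (the log path under /mnt/user/appdata is an illustrative assumption; any share works):

#!/bin/bash
#description=Append a timestamped rootfs usage line so a slow fill-up can be spotted early.
#name=Log rootfs usage
# "df -h /" prints a header row plus the rootfs row; NR==2 keeps only the data row.
echo "$(date '+%F %T') $(df -h / | awk 'NR==2')" >> /mnt/user/appdata/rootfs-usage.log

Scheduled hourly, this leaves a history you can check after a crash to see whether rootfs was slowly filling up.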


Does the syslog server just record the system log? I had a look at the logs before it crashed and everything was fine. In the past there was sometimes this typical NGINX bug that was filling up the log like hell.

 

I'm wondering why my system now reported an unclean shutdown and started a parity check again, because I only rebooted after setting up the syslog server so that I would have all logs from a fresh boot. I know the shutdown was unclean the time before, when the server died, but not now, when I just clicked reboot. I canceled the parity check yesterday. Is it because of that?

 

And I realized that I did make one change, but that was also days ago. Could it be the reason? I created a user script that resizes the log partition, because I'm tired of restarting when the NGINX error occurs out of nowhere:

 

#!/bin/bash
#description=This script sets the log partition to 4GB.
#name=Resize unRAID log partition
# Remount the tmpfs that backs /var/log with a 4GiB (4096m) size limit.
mount -o remount,size=4096m /var/log
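
For reference, if the goal is to have the bigger log partition from every boot onward instead of running the script manually, a minimal sketch of an alternative (assuming the stock /boot/config/go startup script) is:

# /boot/config/go is executed once during Unraid startup,
# so the remount is applied before anything can fill /var/log.
echo 'mount -o remount,size=4096m /var/log' >> /boot/config/go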

 

50 minutes ago, sasbro97 said:

Does the syslog server just record the system log? I had a look at the logs before it crashed and everything was fine. In the past there was sometimes this typical NGINX bug that was filling up the log like hell.

Yes. The point is to capture the log right up to the moment immediately before the crash.

 

51 minutes ago, sasbro97 said:

I'm wondering why my system now reported an unclean shutdown and started a parity check again, because I only rebooted after setting up the syslog server so that I would have all logs from a fresh boot. I know the shutdown was unclean the time before, when the server died, but not now, when I just clicked reboot. I canceled the parity check yesterday. Is it because of that?

You can get an unclean shutdown even when simply clicking the Reboot button on the Main tab if Unraid was unable to stop the array successfully. What happened earlier should be irrelevant. Have you confirmed that you can stop the array successfully, within the timeouts described in the online documentation, before hitting Reboot? The documentation is accessible via the Manual link at the bottom of the Unraid GUI; the Unraid OS -> Manual section covers most aspects of the current release.

Filesystem      Size  Used Avail Use% Mounted on
rootfs           32G  540M   31G   2% /

It happened again, and it's not rootfs.

 

It started at 16:05 when my Nextcloud went down again; that is always the first thing that happens. I can clearly see in the logs that these messages repeat endlessly, but I don't know what they mean or what is causing them:

Feb  1 16:02:56 UNRAID-Server kernel: br-71456806b542: port 42(vethb0c30bd) entered disabled state
Feb  1 16:02:56 UNRAID-Server kernel: veth6b2d3ed: renamed from eth0
Feb  1 16:02:56 UNRAID-Server kernel: br-71456806b542: port 42(vethb0c30bd) entered disabled state
Feb  1 16:02:56 UNRAID-Server kernel: device vethb0c30bd left promiscuous mode
Feb  1 16:02:56 UNRAID-Server kernel: br-71456806b542: port 42(vethb0c30bd) entered disabled state
Feb  1 16:02:56 UNRAID-Server kernel: br-71456806b542: port 42(vethb02ff3a) entered blocking state
Feb  1 16:02:56 UNRAID-Server kernel: br-71456806b542: port 42(vethb02ff3a) entered disabled state
Feb  1 16:02:56 UNRAID-Server kernel: device vethb02ff3a entered promiscuous mode
Feb  1 16:02:56 UNRAID-Server kernel: br-71456806b542: port 42(vethb02ff3a) entered blocking state
Feb  1 16:02:56 UNRAID-Server kernel: br-71456806b542: port 42(vethb02ff3a) entered forwarding state
Feb  1 16:02:57 UNRAID-Server kernel: eth0: renamed from vethc7012bb
Feb  1 16:02:57 UNRAID-Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethb02ff3a: link becomes ready
Feb  1 16:06:07 UNRAID-Server kernel: br-71456806b542: port 42(vethb02ff3a) entered disabled state
Feb  1 16:06:07 UNRAID-Server kernel: vethc7012bb: renamed from eth0
Feb  1 16:06:07 UNRAID-Server kernel: br-71456806b542: port 42(vethb02ff3a) entered disabled state
Feb  1 16:06:07 UNRAID-Server kernel: device vethb02ff3a left promiscuous mode
Feb  1 16:06:07 UNRAID-Server kernel: br-71456806b542: port 42(vethb02ff3a) entered disabled state
Feb  1 16:06:07 UNRAID-Server kernel: br-71456806b542: port 42(veth08927d3) entered blocking state
Feb  1 16:06:07 UNRAID-Server kernel: br-71456806b542: port 42(veth08927d3) entered disabled state
Feb  1 16:06:07 UNRAID-Server kernel: device veth08927d3 entered promiscuous mode
Feb  1 16:06:07 UNRAID-Server kernel: br-71456806b542: port 42(veth08927d3) entered blocking state
Feb  1 16:06:07 UNRAID-Server kernel: br-71456806b542: port 42(veth08927d3) entered forwarding state
Feb  1 16:06:08 UNRAID-Server kernel: eth0: renamed from vethd074a55

Then, 50 minutes later, I see a big error. I just let the system keep going, running at 100% CPU:

 

Feb  1 16:54:00 UNRAID-Server kernel: ------------[ cut here ]------------
Feb  1 16:54:00 UNRAID-Server kernel: NETDEV WATCHDOG: eth0 (e1000e): transmit queue 0 timed out
Feb  1 16:54:00 UNRAID-Server kernel: WARNING: CPU: 10 PID: 76 at net/sched/sch_generic.c:525 dev_watchdog+0x14e/0x1c0
Feb  1 16:54:00 UNRAID-Server kernel: Modules linked in: wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap ipvlan xt_nat xt_tcpudp veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter algif_hash algif_skcipher af_alg cmac bnep xfs md_mod tcp_diag inet_diag nct6775 nct6775_core hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs af_packet 8021q garp mrp bridge stp llc bonding tls zfs(PO) i915 intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel zunicode(PO) zzstd(O) kvm zlua(O) iosf_mbi drm_buddy i2c_algo_bit ttm drm_display_helper zavl(PO) crct10dif_pclmul drm_kms_helper crc32_pclmul crc32c_intel btusb ghash_clmulni_intel btrtl
Feb  1 16:54:00 UNRAID-Server kernel: sha512_ssse3 btbcm sha256_ssse3 btintel sha1_ssse3 icp(PO) aesni_intel drm bluetooth crypto_simd cryptd zcommon(PO) mei_hdcp mei_pxp znvpair(PO) intel_gtt rapl spl(O) ecdh_generic cdc_acm ecc tpm_crb intel_cstate wmi_bmof mpt3sas intel_uncore nvme i2c_i801 agpgart mei_me syscopyarea sysfillrect ahci i2c_smbus raid_class sysimgblt e1000e nvme_core i2c_core mei scsi_transport_sas libahci video vmd thermal fb_sys_fops fan tpm_tis tpm_tis_core wmi tpm intel_pmc_core backlight acpi_pad acpi_tad button unix
Feb  1 16:54:00 UNRAID-Server kernel: CPU: 10 PID: 76 Comm: ksoftirqd/10 Tainted: P           O       6.1.64-Unraid #1
Feb  1 16:54:00 UNRAID-Server kernel: Hardware name: ASUS System Product Name/PRIME Z790M-PLUS D4, BIOS 0810 02/22/2023
Feb  1 16:54:00 UNRAID-Server kernel: RIP: 0010:dev_watchdog+0x14e/0x1c0
Feb  1 16:54:00 UNRAID-Server kernel: Code: a4 c5 00 00 75 26 48 89 ef c6 05 a8 a4 c5 00 01 e8 59 23 fc ff 44 89 f1 48 89 ee 48 c7 c7 58 80 15 82 48 89 c2 e8 ab 73 93 ff <0f> 0b 48 89 ef e8 32 fb ff ff 48 8b 83 88 fc ff ff 48 89 ef 44 89
Feb  1 16:54:00 UNRAID-Server kernel: RSP: 0018:ffffc90000413da8 EFLAGS: 00010282
Feb  1 16:54:00 UNRAID-Server kernel: RAX: 0000000000000000 RBX: ffff888107170448 RCX: 0000000000000027
Feb  1 16:54:00 UNRAID-Server kernel: RDX: 0000000000000103 RSI: ffffffff820d7e01 RDI: 00000000ffffffff
Feb  1 16:54:00 UNRAID-Server kernel: RBP: ffff888107170000 R08: 0000000000000000 R09: ffffffff829513f0
Feb  1 16:54:00 UNRAID-Server kernel: R10: 00003fffffffffff R11: ffff88907f7d10da R12: 0000000000000000
Feb  1 16:54:00 UNRAID-Server kernel: R13: ffff88810717039c R14: 0000000000000000 R15: 0000000000000001
Feb  1 16:54:00 UNRAID-Server kernel: FS:  0000000000000000(0000) GS:ffff88903f480000(0000) knlGS:0000000000000000
Feb  1 16:54:00 UNRAID-Server kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb  1 16:54:00 UNRAID-Server kernel: CR2: 00007fff8f304ff8 CR3: 0000000974496000 CR4: 0000000000750ee0
Feb  1 16:54:00 UNRAID-Server kernel: PKRU: 55555554
Feb  1 16:54:00 UNRAID-Server kernel: Call Trace:
Feb  1 16:54:00 UNRAID-Server kernel: <TASK>
Feb  1 16:54:00 UNRAID-Server kernel: ? __warn+0xab/0x122
Feb  1 16:54:00 UNRAID-Server kernel: ? report_bug+0x109/0x17e
Feb  1 16:54:00 UNRAID-Server kernel: ? dev_watchdog+0x14e/0x1c0
Feb  1 16:54:00 UNRAID-Server kernel: ? handle_bug+0x41/0x6f
Feb  1 16:54:00 UNRAID-Server kernel: ? exc_invalid_op+0x13/0x60
Feb  1 16:54:00 UNRAID-Server kernel: ? asm_exc_invalid_op+0x16/0x20
Feb  1 16:54:00 UNRAID-Server kernel: ? dev_watchdog+0x14e/0x1c0
Feb  1 16:54:00 UNRAID-Server kernel: ? dev_watchdog+0x14e/0x1c0
Feb  1 16:54:00 UNRAID-Server kernel: ? psched_ppscfg_precompute+0x57/0x57
Feb  1 16:54:00 UNRAID-Server kernel: ? psched_ppscfg_precompute+0x57/0x57
Feb  1 16:54:00 UNRAID-Server kernel: call_timer_fn+0x6c/0x10d
Feb  1 16:54:00 UNRAID-Server kernel: __run_timers+0x144/0x184
Feb  1 16:54:00 UNRAID-Server kernel: ? _raw_spin_unlock+0x14/0x29
Feb  1 16:54:00 UNRAID-Server kernel: ? raw_spin_rq_unlock_irq+0x5/0x10
Feb  1 16:54:00 UNRAID-Server kernel: ? finish_task_switch.isra.0+0x140/0x218
Feb  1 16:54:00 UNRAID-Server kernel: run_timer_softirq+0x2b/0x43
Feb  1 16:54:00 UNRAID-Server kernel: __do_softirq+0x126/0x288
Feb  1 16:54:00 UNRAID-Server kernel: run_ksoftirqd+0x29/0x38
Feb  1 16:54:00 UNRAID-Server kernel: smpboot_thread_fn+0x1b6/0x1d3
Feb  1 16:54:00 UNRAID-Server kernel: ? find_next_bit+0x5/0x5
Feb  1 16:54:00 UNRAID-Server kernel: kthread+0xe4/0xef
Feb  1 16:54:00 UNRAID-Server kernel: ? kthread_complete_and_exit+0x1b/0x1b
Feb  1 16:54:00 UNRAID-Server kernel: ret_from_fork+0x1f/0x30
Feb  1 16:54:00 UNRAID-Server kernel: </TASK>
Feb  1 16:54:00 UNRAID-Server kernel: ---[ end trace 0000000000000000 ]---
Feb  1 16:54:00 UNRAID-Server kernel: e1000e 0000:00:1f.6 eth0: Reset adapter unexpectedly
Feb  1 16:54:00 UNRAID-Server kernel: bond0: (slave eth0): link status definitely down, disabling slave
Feb  1 16:54:00 UNRAID-Server kernel: device eth0 left promiscuous mode
Feb  1 16:54:00 UNRAID-Server kernel: bond0: now running without any active interface!
Feb  1 16:54:00 UNRAID-Server kernel: br0: port 1(bond0) entered disabled state
Feb  1 16:54:01 UNRAID-Server dhcpcd[1366]: br0: carrier lost
Feb  1 16:54:01 UNRAID-Server avahi-daemon[22779]: Withdrawing address record for 192.168.178.29 on br0.
Feb  1 16:54:01 UNRAID-Server avahi-daemon[22779]: Leaving mDNS multicast group on interface br0.IPv4 with address 192.168.178.29.
Feb  1 16:54:01 UNRAID-Server avahi-daemon[22779]: Interface br0.IPv4 no longer relevant for mDNS.
Feb  1 16:54:01 UNRAID-Server dhcpcd[1366]: br0: deleting route to 192.168.178.0/24
Feb  1 16:54:01 UNRAID-Server dhcpcd[1366]: br0: deleting default route via 192.168.178.1
Feb  1 16:54:03 UNRAID-Server ntpd[1525]: Deleting interface #1 *multiple*, 192.168.178.29#123, interface stats: received=298, sent=300, dropped=0, active_time=26778 secs
Feb  1 16:54:03 UNRAID-Server ntpd[1525]: 216.239.35.12 local addr 192.168.178.29 -> <null>
Feb  1 16:54:03 UNRAID-Server ntpd[1525]: 216.239.35.8 local addr 192.168.178.29 -> <null>
Feb  1 16:54:03 UNRAID-Server ntpd[1525]: 216.239.35.4 local addr 192.168.178.29 -> <null>
Feb  1 16:54:03 UNRAID-Server ntpd[1525]: 216.239.35.0 local addr 192.168.178.29 -> <null>
Feb  1 16:54:04 UNRAID-Server kernel: e1000e 0000:00:1f.6 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Feb  1 16:54:04 UNRAID-Server dhcpcd[1366]: br0: carrier acquired
Feb  1 16:54:04 UNRAID-Server kernel: bond0: (slave eth0): link status definitely up, 1000 Mbps full duplex
Feb  1 16:54:04 UNRAID-Server kernel: bond0: (slave eth0): making interface the new active one
Feb  1 16:54:04 UNRAID-Server kernel: device eth0 entered promiscuous mode
Feb  1 16:54:04 UNRAID-Server kernel: bond0: active interface up!
Feb  1 16:54:04 UNRAID-Server kernel: br0: port 1(bond0) entered blocking state
Feb  1 16:54:04 UNRAID-Server kernel: br0: port 1(bond0) entered forwarding state
Feb  1 16:54:05 UNRAID-Server dhcpcd[1366]: br0: rebinding lease of 192.168.178.29
Feb  1 16:54:05 UNRAID-Server dhcpcd[1366]: br0: probing address 192.168.178.29/24
Feb  1 16:54:06 UNRAID-Server ntpd[1525]: Listen normally on 2 shim-br0 192.168.178.29:123
Feb  1 16:54:06 UNRAID-Server ntpd[1525]: new interface(s) found: waking up resolver
Feb  1 16:54:09 UNRAID-Server dhcpcd[1366]: br0: leased 192.168.178.29 for 864000 seconds
Feb  1 16:54:09 UNRAID-Server dhcpcd[1366]: br0: adding route to 192.168.178.0/24
Feb  1 16:54:09 UNRAID-Server dhcpcd[1366]: br0: adding default route via 192.168.178.1
Feb  1 16:54:09 UNRAID-Server avahi-daemon[22779]: Joining mDNS multicast group on interface br0.IPv4 with address 192.168.178.29.
Feb  1 16:54:09 UNRAID-Server avahi-daemon[22779]: New relevant interface br0.IPv4 for mDNS.
Feb  1 16:54:09 UNRAID-Server avahi-daemon[22779]: Registering new address record for 192.168.178.29 on br0.IPv4.
Feb  1 16:54:10 UNRAID-Server network: hook services: interface=br0, reason=BOUND, protocol=dhcp
Feb  1 16:54:10 UNRAID-Server network: update services: 30s

Then the first logs started repeating again.

 

I tried to stop Docker because I thought it was the reason, but the system was still completely unresponsive at 100% CPU load, which led to red CPU error messages and also something like this:

Feb 1 17:55:20 UNRAID-Server kernel: out_of_memory+0x3b3/0x3e5

It only stopped when I stopped the array. Now everything is normal and no new log entries are being written.

 

Here are my diagnostics again. I have not restarted yet, so everything should still be in there.

unraid-server-diagnostics-20240201-1939.zip

 

EDIT: I remember the last thing I probably added: the netdata Docker container, even though that was a while ago. I'm mentioning it because I have seen this thread here:

I will leave this one stopped. Maybe port 19999 is a problem for Unraid? I was running it in host network mode.
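
For what it's worth, a quick sketch for checking whether something on the host is already listening on that port (run from the Unraid console):

# List listening TCP sockets with their owning process, filtered to port 19999
ss -tlnp | grep 19999

No output means the port is free.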

 

 


Unfortunately there's nothing relevant logged; this usually points to a hardware issue. One thing you can try is to boot the server in safe mode with all Docker containers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes, it's likely a hardware problem; if it doesn't, start turning the other services back on one by one.


@JorgeB can you or somebody else help me figure out how Nextcloud is able to crash everything? It is 100% causing it. It happened again today after 2 days; without it, everything was running fine. I also got a ZFS error once when I tried to update the container's Docker image. I have already capped the container's RAM at 6GB, but I did not cap the CPU, which could be the issue too. It shouldn't happen anyway, but I guess full access to the whole CPU is what allows the container to do this.
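
For reference, a minimal sketch for capping the container's CPU without recreating it (the container name nextcloud is an assumption; check docker ps):

# Limit the container to at most 4 CPUs' worth of CPU time
docker update --cpus=4 nextcloud

To make the cap permanent on Unraid, the same --cpus=4 flag can go into the container template's Extra Parameters field (visible in Advanced View).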

 

How can I monitor this? Or how can I find out what's wrong? I would never have thought that a single Docker container could do this. I'm still not sure whether the Docker installation is corrupt.
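
One simple way to watch per-container usage live from the console is docker stats; a one-shot snapshot looks like this:

# Print each container's current CPU and memory usage once and exit
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

Leaving off --no-stream keeps the display updating, so you can watch which container spikes when the problem starts.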


I'm experiencing a similar issue with Nextcloud and Unraid. My log files don't show anything conclusive that would help me identify the problem. I believe the issue has to do with generating previews of images and files in Nextcloud, but nothing is conclusive so far.
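
If previews are the suspect, a minimal sketch for testing that theory is to switch off preview generation via Nextcloud's enable_previews option (assuming the official Nextcloud image, where occ lives in the container's working directory; adjust the exec user and container name for other images):

# Disable preview generation, then watch whether the load spikes stop
docker exec -u www-data nextcloud php occ config:system:set enable_previews --value=false --type=boolean

Setting it back to true re-enables previews if it turns out not to be the cause.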

