Everything posted by MarekNosek

  1. Hi, what do you mean by "point it to the existing vdisk.img"? Thanks in advance for the help.
  2. That has no effect. I have to manually stop and start the Docker container. Any tips?
  3. Hi guys, I have Docker containers randomly crashing on me, and I have not found any pattern. At first I thought it was a problem with Nextcloud or its database, which is why I installed Uptime Kuma to monitor it. MariaDB is 100% up and there is no problem with it (to my knowledge). But both Nextcloud containers and the Uptime Kuma container become unresponsive, randomly, roughly once a week. The latest crash was my NEXTCLOUDNNFG, today at 4 in the morning. The attached diagnostics were taken just after I noticed it. I am not really sure where to look for the problem anymore; any recommendations on what to try are welcome. The logs from Uptime Kuma seem to be OK. The logs from Nextcloud were showing this in the morning:
     zend_mm_heap corrupted
     Error: Cannot use object of type stdClass as array in /config/www/nextcloud/lib/private/Security/VerificationToken/CleanUpJob.php:63
     zend_mm_heap corrupted
     Error: Cannot use object of type stdClass as array in /config/www/nextcloud/lib/private/Security/VerificationToken/CleanUpJob.php:63
     Thanks, guys.
     darwin-diagnostics-20211109-0720.zip
  4. There is nothing unusual. It shows the last checks done, no error. On the other hand, `docker stats` shows 0% while the container is "officially running", and `docker container top` shows: "Error response from daemon: container is not running". No doubt. I am not trying to blame anyone else; I would just love to make it work as it is supposed to. It just drives me nuts that I have no idea where to look for problems.
     Nov 7 23:40:04 Darwin flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
     ### [PREVIOUS LINE REPEATED 91 TIMES] ###
     Nov 8 01:16:01 Darwin kernel: traps: monitor[4838] general protection fault ip:154eae741eb2 sp:7fff7b56f728 error:0 in libglib-2.0.so.0.6600.2[154eae720000+82000]
     Nov 8 01:16:16 Darwin flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
     Nov 8 01:16:16 Darwin crond[2906]: exit status 139 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
     Nov 8 01:16:16 Darwin sSMTP[5518]: Creating SSL connection to host
     Nov 8 01:16:16 Darwin sSMTP[5518]: SSL connection using TLS_AES_256_GCM_SHA384
     Nov 8 01:16:16 Darwin sSMTP[5518]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials h1sm14329134wmb.7 - gsmtp)
     Nov 8 01:17:16 Darwin flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
     ### [PREVIOUS LINE REPEATED 70 TIMES] ###
     Nov 8 02:30:13 Darwin crond[2906]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Nov 8 02:30:24 Darwin flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
     ### [PREVIOUS LINE REPEATED 381 TIMES] ###
     Nov 8 09:05:01 Darwin kernel: php7[2654]: segfault at 152ca3cc2420 ip 0000152c937a1e72 sp 00007fff413831d0 error 4 in opcache.so[152c9377e000+5b000]
     Nov 8 09:05:01 Darwin kernel: Code: 2f 3c 02 74 7b 0f b6 43 1f 3c 04 75 a3 48 63 4b 10 4c 89 c2 48 83 e9 50 48 c1 f9 04 89 c8 48 d3 e2 c1 e8 06 48 09 14 c7 eb 8e <66> 0f 1f 44 00 00 48 63 4b 0c 0f b6 73 1c 48 83 e9 50 48 c1 f9 04
     Nov 8 09:05:12 Darwin flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
     ### [PREVIOUS LINE REPEATED 517 TIMES] ###
     Nov 8 18:01:41 Darwin kernel: docker0: port 6(veth0e7656b) entered disabled state
     Nov 8 18:01:41 Darwin kernel: veth03f5459: renamed from eth0
     Nov 8 18:01:41 Darwin avahi-daemon[6821]: Interface veth0e7656b.IPv6 no longer relevant for mDNS.
     Nov 8 18:01:41 Darwin avahi-daemon[6821]: Leaving mDNS multicast group on interface veth0e7656b.IPv6 with address fe80::a812:e3ff:fe3b:8a1f.
     Nov 8 18:01:41 Darwin kernel: docker0: port 6(veth0e7656b) entered disabled state
     Nov 8 18:01:41 Darwin kernel: device veth0e7656b left promiscuous mode
     Nov 8 18:01:41 Darwin kernel: docker0: port 6(veth0e7656b) entered disabled state
     Nov 8 18:01:41 Darwin avahi-daemon[6821]: Withdrawing address record for fe80::a812:e3ff:fe3b:8a1f on veth0e7656b.
     Nov 8 18:01:41 Darwin kernel: docker0: port 6(veth7914760) entered blocking state
     Nov 8 18:01:41 Darwin kernel: docker0: port 6(veth7914760) entered disabled state
     Nov 8 18:01:41 Darwin kernel: device veth7914760 entered promiscuous mode
     Nov 8 18:01:41 Darwin kernel: docker0: port 6(veth7914760) entered blocking state
     Nov 8 18:01:41 Darwin kernel: docker0: port 6(veth7914760) entered forwarding state
     Nov 8 18:01:41 Darwin kernel: eth0: renamed from veth11cb461
     Nov 8 18:01:41 Darwin kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7914760: link becomes ready
     Nov 8 18:01:42 Darwin avahi-daemon[6821]: Joining mDNS multicast group on interface veth7914760.IPv6 with address fe80::b465:25ff:fe8b:a5ff.
     Nov 8 18:01:42 Darwin avahi-daemon[6821]: New relevant interface veth7914760.IPv6 for mDNS.
     Nov 8 18:01:42 Darwin avahi-daemon[6821]: Registering new address record for fe80::b465:25ff:fe8b:a5ff on veth7914760.*.
     Nov 8 18:02:16 Darwin flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update
     I do appreciate your help. If anything seems odd, let me know. Thank you a bunch!!!
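     For reference, a few commands that can help confirm whether a container is genuinely alive or only listed as running in the UI. This is only a rough sketch, and the container name `nextcloud` below is just an example, not necessarily what it is called here:
     ```bash
     # List all containers with the state Docker itself reports
     docker ps -a --format 'table {{.Names}}\t{{.Status}}'

     # Exit code and OOM-kill flag of one container (name is an example, adjust as needed)
     docker inspect --format '{{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' nextcloud

     # Last log lines the container produced before it stopped responding
     docker logs --tail 50 nextcloud
     ```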
  5. Right now I am running 22.2.0 and the latest MariaDB, and the problem continues... I have installed Uptime Kuma to monitor the problem a bit, and it seems the databases are 100% reachable. On the other hand, the Uptime Kuma container itself also becomes unreachable from time to time. I am starting to suspect that there is a problem somewhere in the Docker service itself.
  6. But as you said, I suspect the problem is elsewhere, because I also have the Nextcloud Docker containers "crashing" on me from time to time. I am rather lost and do not know how to troubleshoot it.
  7. Thanks for the tip. I will try it and see if it changes anything.
  8. Hi, I installed Uptime Kuma a couple of weeks ago. It randomly gets the "unhealthy" label and becomes unreachable, sometimes after 2 hours, sometimes after a week. Any ideas what could cause this? Any tips are welcome.
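     If it helps anyone, the "unhealthy" label comes from the health check defined in the Docker image, and Docker keeps a log of the last few probe results. A rough way to read them out from the Unraid terminal; the container name `uptime-kuma` is just an assumption about what it is called locally:
     ```bash
     # Current health status as Docker sees it (healthy / unhealthy / starting)
     docker inspect --format '{{.State.Health.Status}}' uptime-kuma

     # Full health-check log: exit codes and output of the last probes
     docker inspect --format '{{json .State.Health}}' uptime-kuma
     ```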
  9. In the meantime, any tips on how to make it more "stable"? There are at most 5 people connected to it during the day and the data transfer is minimal.
  10. Thanks for the tip. I only get these warnings, which do not really correlate with the 502s. One more thing to note: the only way to make it work again is to restart MariaDB and Nextcloud. Next time the error shows up, I will test whether restarting only one of them also helps.
  11. Hi guys, I have two Nextcloud Dockers set up and it all seems to run fine, but once a day NEXTCLOUD #2 gets a 502 error. I have the following Unraid Docker containers running:
     - NginxProxyManager `autostart wait 0`
     - mariadb `autostart wait 5` on port 3306
     - mariadb2 `autostart wait 20` on port 3307
     - nextcloud `autostart wait 20` on port 444
     - nextcloud2 `autostart wait 20` on port 448
     All on the bridged network. Each Nextcloud is reached via a proxy host in Nginx Proxy Manager, and each Nextcloud is linked to its MariaDB via a separate port. Most of the time everything runs fine, but from time to time, maybe once a day, I run into the 502. What I have tried so far:
     1. I assumed it was a problem of the Dockers starting too fast.
     2. I added a wait time before they start, but the problem appears again.
     3. When I **stop** mariaDB2 and NEXTCLOUD2 and restart them after a few seconds, it works again.
     Could anyone suggest where to look for problems?
     ----------------
     Nextcloud version: `Nextcloud 22.1.1`
     Operating system and version: `UNRAID Version: 6.9.2`
     Nginx version: `v2.9.9 © 2021`
     The output of your Nextcloud log in **Admin > Logging**: this is the only error/warning I get since it has been working:
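     When the 502 shows up again, it may help to narrow down whether Nginx Proxy Manager is the part failing or whether nextcloud2 itself has stopped answering. A rough sketch of what I mean, run from the Unraid terminal; SERVER_IP is a placeholder for the server's LAN address, the container names are the ones from the list above, and http/https should be adjusted to however the container is actually exposed:
     ```bash
     # Does nextcloud2 still answer on its mapped port (448) when the proxy is bypassed?
     curl -k -I https://SERVER_IP:448/status.php

     # What do the two containers involved report right before the 502?
     docker logs --tail 50 nextcloud2
     docker logs --tail 50 mariadb2
     ```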
  12. Hi guys, I know it is a long time after this post, but has anyone of you experienced 502 errors on the second Nextcloud from time to time? For me everything works fine, but once a day it redirects to a 502 Bad Gateway for some reason.
  13. Hi guys, I have two Nextcloud Dockers set up and it all seems to run fine, but once a day NEXTCLOUD #2 gets a 502 error. I have the following Unraid Docker containers running:
     - NginxProxyManager `autostart wait 0`
     - mariadb `autostart wait 5` on port 3306
     - mariadb2 `autostart wait 20` on port 3307
     - nextcloud `autostart wait 20` on port 444
     - nextcloud2 `autostart wait 20` on port 448
     All on the bridged network. Each Nextcloud is reached via a proxy host in Nginx Proxy Manager, and each Nextcloud is linked to its MariaDB via a separate port. Most of the time everything runs fine, but from time to time, maybe once a day, I run into the 502. What I have tried so far:
     1. I assumed it was a problem of the Dockers starting too fast.
     2. I added a wait time before they start, but the problem appears again.
     3. When I **stop** mariaDB2 and NEXTCLOUD2 and restart them after a few seconds, it works again.
     Could anyone suggest where to look for problems?
     ----------------
     Nextcloud version: `Nextcloud 22.1.1`
     Operating system and version: `UNRAID Version: 6.9.2`
     Nginx version: `v2.9.9 © 2021`
     The output of your Nextcloud log in **Admin > Logging**: this is the only error/warning I get since it has been working:
  14. OK, so a tower with a newer CPU (Intel® Core™ i5-7500 CPU @ 3.40GHz) has very similar results. I copied the same dummy file to a test share on the same cache drive.
     ~ % rsync -a --progress --stats --human-readable XXXXXX/your-file-name.zip /Volumes/test
     building file list ...
     1 file to consider
     your-file-name.zip
     3.22G 100% 53.48MB/s 0:00:57 (xfer#1, to-check=0/1)
     Number of files: 1
     Number of files transferred: 1
     Total file size: 3.22G bytes
     Total transferred file size: 3.22G bytes
     Literal data: 3.22G bytes
     Matched data: 0 bytes
     File list size: 86
     File list generation time: 0.002 seconds
     File list transfer time: 0.000 seconds
     Total bytes sent: 3.22G
     Total bytes received: 42
     sent 3.22G bytes received 42 bytes 54.14M bytes/sec
     total size is 3.22G speedup is 1.00
     So I am guessing there must be an issue elsewhere.
  15. Yes, it is macOS. Is there any "setting" you know of to make it better? Off topic: I have ordered a pre-built PC with better CPU specs to test whether the single-thread performance is the bottleneck. I will report soon.
  16. Thank you SimonF for the tip. The CPU is running in the 3000s MHz when copying (often dipping to 1800-2500), and it does seem to be at least a bit faster. What does not make sense to me is the inconsistency in the speeds, the fact that the RAM is almost all the time full of cached files, and that the SSD only writes occasionally. Is it correct that the RAM does not "clear the cache" after the transfer to the SSD? A lame comment, maybe, but it feels like the speed drops when the RAM hits the ceiling and everything is halted. These are the statistics while copying many streams at the same time (I get the fastest results this way, around 70 MB/s):
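     For what it is worth, a quick way to see how much of the RAM is re-usable page cache versus "dirty" data still waiting to be flushed to disk; these are standard Linux tools and files, nothing Unraid-specific assumed:
     ```bash
     # "buff/cache" is file cache the kernel can drop at any time; "available" is what is really free for use
     free -h

     # Dirty/Writeback are the amounts of cached data not yet written out to disk
     grep -E 'Dirty|Writeback' /proc/meminfo

     # The thresholds that control when the kernel starts flushing dirty data
     sysctl vm.dirty_ratio vm.dirty_background_ratio
     ```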
  17. Which leads me to a potential discovery... do I understand it right that the X3470 I bought is underclocked?
     Every 1.0s: grep MHz /proc/cpuinfo
     cpu MHz : 1237.918
     cpu MHz : 1442.808
     cpu MHz : 2354.694
     cpu MHz : 2413.491
     cpu MHz : 1297.061
     cpu MHz : 1386.903
     cpu MHz : 1753.300
     cpu MHz : 1744.538
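     A note on reading those numbers: /proc/cpuinfo shows the frequency each core happens to be running at the moment of sampling, and with CPU frequency scaling the cores clock down when idle, so low values on their own do not necessarily mean the chip is underclocked. A rough way to check the scaling governor and the limits the kernel is allowed to use; these are standard sysfs paths, assuming cpufreq is exposed on this kernel:
     ```bash
     # Active frequency scaling governor for each core (e.g. powersave, ondemand, performance)
     cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

     # Minimum and maximum frequencies (in kHz) the kernel may use on core 0
     cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
     cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

     # Watch the live per-core frequencies while a transfer is running
     watch -n 1 "grep MHz /proc/cpuinfo"
     ```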
  18. In the morning I am reaching higher:
     Time: Mon, 26 Apr 2021 05:29:35 GMT
     Accepted connection from 10.10.10.100, port 64020
     Cookie: Mareks-MacBook-Pro.local.1619414975.
     TCP MSS: 0 (default)
     [ 5] local 10.10.10.10 port 5201 connected to 10.10.10.100 port 64021
     Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
     [ ID] Interval Transfer Bitrate
     [ 5] 0.00-1.00 sec 112 MBytes 936 Mbits/sec
     [ 5] 1.00-2.00 sec 112 MBytes 940 Mbits/sec
     [ 5] 2.00-3.00 sec 112 MBytes 940 Mbits/sec
     [ 5] 3.00-4.00 sec 112 MBytes 941 Mbits/sec
     [ 5] 4.00-5.00 sec 111 MBytes 931 Mbits/sec
     [ 5] 5.00-6.00 sec 112 MBytes 941 Mbits/sec
     [ 5] 6.00-7.00 sec 112 MBytes 939 Mbits/sec
     [ 5] 7.00-8.00 sec 112 MBytes 941 Mbits/sec
     [ 5] 8.00-9.00 sec 112 MBytes 941 Mbits/sec
     [ 5] 9.00-10.00 sec 112 MBytes 940 Mbits/sec
     [ 5] 10.00-10.00 sec 492 KBytes 902 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     Test Complete. Summary Results:
     [ ID] Interval Transfer Bitrate
     [ 5] (sender statistics not available)
     [ 5] 0.00-10.00 sec 1.09 GBytes 939 Mbits/sec receiver
     rcv_tcp_congestion bbr
     iperf 3.9
     Linux Darwin 5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021 x86_64
     -----------------------------------------------------------
     Server listening on 5201
     -----------------------------------------------------------
     Both ways:
     Connecting to host 10.10.10.100, port 5201
     [ 5] local 10.10.10.10 port 60268 connected to 10.10.10.100 port 5201
     [ ID] Interval Transfer Bitrate Retr Cwnd
     [ 5] 0.00-1.00 sec 115 MBytes 964 Mbits/sec 0 305 KBytes
     [ 5] 1.00-2.00 sec 112 MBytes 939 Mbits/sec 0 308 KBytes
     [ 5] 2.00-3.00 sec 112 MBytes 940 Mbits/sec 0 311 KBytes
     [ 5] 3.00-4.00 sec 111 MBytes 933 Mbits/sec 0 303 KBytes
     [ 5] 4.00-5.00 sec 112 MBytes 944 Mbits/sec 0 328 KBytes
     [ 5] 5.00-6.00 sec 111 MBytes 933 Mbits/sec 0 322 KBytes
     [ 5] 6.00-7.00 sec 112 MBytes 944 Mbits/sec 0 305 KBytes
     [ 5] 7.00-8.00 sec 111 MBytes 933 Mbits/sec 0 311 KBytes
     [ 5] 8.00-9.00 sec 112 MBytes 944 Mbits/sec 0 311 KBytes
     [ 5] 9.00-10.00 sec 111 MBytes 933 Mbits/sec 0 5.66 KBytes
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval Transfer Bitrate Retr
     [ 5] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec 0 sender
     [ 5] 0.00-10.00 sec 1.09 GBytes 938 Mbits/sec receiver
     iperf Done.
     So I am wondering whether to try to overclock the CPU (it is running at very low temps all the time).
  19. Thank you very much nonetheless. I will keep trying. Maybe I will stumble upon a solution eventually.
  20. Speeds to the cache and to disk1 are similar, around 55 MB/s (with the same 3 GB dummy file). I also tested three simultaneous transfers of that file to different shares, and that tops out at around 75 MB/s.
  21. root@Darwin:~# iperf3 -s -V
     iperf 3.9
     Linux Darwin 5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021 x86_64
     -----------------------------------------------------------
     Server listening on 5201
     -----------------------------------------------------------
     Time: Sun, 25 Apr 2021 10:26:52 GMT
     Accepted connection from 10.10.10.100, port 60807
     Cookie: Mareks-MacBook-Pro.local.1619346412.
     TCP MSS: 0 (default)
     [ 5] local 10.10.10.10 port 5201 connected to 10.10.10.100 port 60808
     Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
     [ ID] Interval Transfer Bitrate
     [ 5] 0.00-1.00 sec 59.4 MBytes 498 Mbits/sec
     [ 5] 1.00-2.00 sec 85.2 MBytes 715 Mbits/sec
     [ 5] 2.00-3.00 sec 86.7 MBytes 727 Mbits/sec
     [ 5] 3.00-4.00 sec 86.9 MBytes 729 Mbits/sec
     [ 5] 4.00-5.00 sec 85.9 MBytes 720 Mbits/sec
     [ 5] 5.00-6.00 sec 83.5 MBytes 701 Mbits/sec
     [ 5] 6.00-7.00 sec 84.6 MBytes 710 Mbits/sec
     [ 5] 7.00-8.00 sec 86.1 MBytes 723 Mbits/sec
     [ 5] 8.00-9.00 sec 85.5 MBytes 717 Mbits/sec
     [ 5] 9.00-10.00 sec 87.1 MBytes 731 Mbits/sec
     [ 5] 10.00-10.00 sec 32.5 KBytes 336 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     Test Complete. Summary Results:
     [ ID] Interval Transfer Bitrate
     [ 5] (sender statistics not available)
     [ 5] 0.00-10.00 sec 831 MBytes 697 Mbits/sec receiver
     rcv_tcp_congestion bbr
     iperf 3.9
     Linux Darwin 5.10.28-Unraid #1 SMP Wed Apr 7 08:23:18 PDT 2021 x86_64
     -----------------------------------------------------------
     Server listening on 5201
     -----------------------------------------------------------
     I ran "iperf3 -c 10.10.10.10 -P 1" on the client side, if that is what you meant. In the morning I had the bitrate in the 900s of Mbits/sec (iperf3 -c 10.10.10.10).
  22. No, there is no detectable difference, tested with a 3 GB dummy file.
  23. Hi, my setup:
     - Version: 6.9.2 (initiated on Version: 6.9.1)
     - 1 gig wired network connection
     - 2x 480 GB cache SSDs in RAID
     - 3x HDD (one parity), Seagate IronWolf
     - 16 GB ECC RAM
     - Xeon X3470
     - all drives set to AHCI in the BIOS
     - dual LAN connected to my Supermicro MB, set to balance-alb (I also tried with a single cable)
     Right now I have it set to "reconstruct write", though I guess that should not matter with cache drives. I am getting max transfer speeds of 50 MB/s when lucky; most of the time it goes at 5-10 MB/s.
     Troubleshooting so far:
     - I can exclude the network, as I can saturate it (tested with iStat, iperf, OpenSpeedTest).
     - There is no difference whether the cache is enabled for a share or not; both top out around ~50 MB/s.
     - CPU seems OK (only rarely does one core spike to 100%).
     - RAM seems OK.
     - When the mover runs, the HDDs can reach ~150 MB/s.
     I am desperate for any kind of help, tips and diagnostics that could pinpoint the bottleneck. Thank you.
     darwin-diagnostics-20210425-0827.zip
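     To separate the disks from the network/SMB side, one thing that can be tried is writing a test file directly on the server, bypassing the share entirely. A rough sketch; /mnt/cache/test and /mnt/disk1/test are assumed paths, so adjust them to an existing share:
     ```bash
     # Raw write to the cache pool, skipping the page cache, with live progress
     dd if=/dev/zero of=/mnt/cache/test/ddtest.bin bs=1M count=3000 oflag=direct status=progress

     # Same test against an array disk for comparison
     dd if=/dev/zero of=/mnt/disk1/test/ddtest.bin bs=1M count=3000 oflag=direct status=progress

     # Clean up the test files afterwards
     rm /mnt/cache/test/ddtest.bin /mnt/disk1/test/ddtest.bin
     ```
     If both of these are much faster than the ~50 MB/s seen over the network, the bottleneck is more likely in SMB or the client than in the pool itself.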
  24. Hi guys, after some troubleshooting I found the problem, or rather cleared up a misunderstanding. I assumed that once both the client and Unraid are running and active, there would be a handshake (a notion of being connected). It was explained to me that unless there is a request from either side, there is not going to be any communication and thus no handshake. The VPN is working as it should now. Thanks for the tips though.
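     In case it helps anyone else, this is roughly how that behaviour can be confirmed, assuming the built-in WireGuard VPN is being used; the tunnel address in the ping is just a placeholder:
     ```bash
     # "latest handshake" only shows up after traffic has actually passed through the tunnel
     wg show

     # Trigger a handshake by generating some traffic from the client, e.g. ping the server's tunnel address
     ping -c 3 10.253.0.1
     ```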