KriS

Members · Posts: 6

  1. Hi, I don't see download results. The progress bar fills while the download test runs, but the result always shows 0.00. Upload speed is OK. After ~20 tests I see big files under \appdata\librespeed\log\nginx: access.log = 36 MB, error.log = 82 MB. Every test adds a few MB to each file.

Access log:

10.xx.xx.xx - - [28/Feb/2021:14:33:07 +0100] "GET /backend/garbage.php?r=0.46221209264303265&ckSize=100 HTTP/1.1" 500 5 "http://10.xx.xx.xx:3002/speedtest_worker.js?r=0.6029260239670162" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.192 Safari/537.36"

Error log:

2021/02/28 14:33:07 [error] 342#342: *4613 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Call to undefined function ctype_digit() in /usr/share/webapps/librespeed/backend/garbage.php:15
Stack trace:
#0 /usr/share/webapps/librespeed/backend/garbage.php(53): getChunkCount()
#1 {main}
  thrown in /usr/share/webapps/librespeed/backend/garbage.php on line 15" while reading response header from upstream, client: 10.xx.xx.xx, server: _, request: "GET /backend/garbage.php?r=0.8301026199060471&ckSize=100 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "10.xx.xx.xx:3002", referrer: "http://10.xx.xx.xx:3002/speedtest_worker.js?r=0.6029260239670162"

Any advice?
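For what it's worth, that fatal error means the PHP runtime serving the backend was built or configured without the ctype extension, which librespeed's garbage.php relies on. A minimal sketch to check for it on the host (assuming the `php` CLI matches the PHP-FPM build behind nginx):

```shell
# Report whether the PHP ctype extension is loaded; without it,
# ctype_digit() is undefined and garbage.php dies with HTTP 500.
check_ctype() {
    if ! command -v php >/dev/null 2>&1; then
        echo "php CLI not found on this host"
        return 0
    fi
    if php -m | grep -qi '^ctype$'; then
        echo "ctype extension loaded"
    else
        echo "ctype extension missing - install/enable php-ctype and restart PHP-FPM"
    fi
}
check_ctype
```

If it reports the extension missing, enabling it and restarting PHP-FPM should make the download test return results again (the upload test does not hit that code path, which would explain why only download fails).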
  2. Hi, since version 6.8.x I've had a problem with CPU usage. A few days after boot, one core gets pinned at 100%; sometimes it happens after 5 days, sometimes after 2 weeks. The previous time it was core 10, but this time, about 2 weeks after boot, it was core 15. I didn't restart the server this time, and after two more weeks one more core was pinned at 100%, so I had cores 11 and 15 used by something, plus one more at ~60%. I started digging and found this:

top - 01:30:06 up 43 days, 15:23, 4 users, load average: 3.58, 3.08, 3.26
Tasks: 3 total, 0 running, 3 sleeping, 0 stopped, 0 zombie
%Cpu0  : 16.7 us,  1.4 sy, 0.0 ni, 81.6 id,  0.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu1  :  9.1 us,  0.7 sy, 0.0 ni, 90.2 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2  : 19.5 us,  1.7 sy, 0.0 ni, 77.4 id,  1.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu3  :  1.7 us,  1.0 sy, 0.0 ni, 97.3 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4  : 30.0 us,  2.0 sy, 0.0 ni, 67.0 id,  0.7 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu5  :  9.7 us,  1.0 sy, 0.0 ni, 89.3 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6  :  7.1 us, 10.2 sy, 0.0 ni, 82.7 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7  : 10.1 us,  0.7 sy, 0.0 ni, 89.2 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu8  :  2.0 us,  5.0 sy, 0.0 ni, 89.6 id,  3.3 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu9  :  8.0 us,  4.3 sy, 0.0 ni, 82.7 id,  4.7 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu10 :  3.0 us,  4.1 sy, 0.0 ni, 90.5 id,  2.4 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu11 :  2.0 us,  2.7 sy, 0.0 ni,  0.0 id, 95.3 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu12 :  5.3 us,  4.3 sy, 0.0 ni, 87.0 id,  3.0 wa, 0.0 hi, 0.3 si, 0.0 st
%Cpu13 :  1.0 us,  3.0 sy, 0.0 ni, 93.3 id,  2.7 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu14 : 26.7 us, 23.3 sy, 0.0 ni, 49.3 id,  0.7 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu15 : 12.7 us, 10.7 sy, 0.0 ni,  0.0 id, 76.6 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 32160.7 total, 230.6 free, 22955.5 used, 8974.6 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 7836.4 avail Mem

  PID USER PR NI    VIRT  RES   SHR S %CPU %MEM    TIME+ COMMAND
20710 root 20  0 3830296 3.2g 22160 S 71.3 10.1 12117:42 /usr/bin/qemu-system-x86_64 -name guest=OPNsense,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-OPNsense/mast+
20595 root 20  0 4995344 4.2g 22404 S 30.7 13.3 19528:28 /usr/bin/qemu-system-x86_64 -name guest=Windows 10,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-Windows 10/+
 7680 root 20  0 4863852 4.1g 22224 S  9.3 13.1  5473:36 /usr/bin/qemu-system-x86_64 -name guest=appliance,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-ap+

The OPNsense and Windows VMs are just for tests... not used and idle, only powered on. The third VM sees a little use, but CPU usage is ~3% when I check it from the VM side. So I shut down all the VMs, but the CPU was still busy... then I stopped Docker as well... and still got high CPU usage on cores 11/15.

top - 02:03:49 up 43 days, 15:57, 4 users, load average: 3.00, 3.11, 3.38
Tasks: 363 total, 1 running, 362 sleeping, 0 stopped, 0 zombie
%Cpu0  : 0.5 us, 0.0 sy, 0.0 ni, 99.5 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6  : 0.5 us, 0.0 sy, 0.0 ni, 99.5 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu8  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu9  : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu10 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu11 : 0.0 us, 0.0 sy, 0.0 ni,  0.0 id,100.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu12 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu13 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu14 : 0.5 us, 0.0 sy, 0.0 ni, 99.5 id,  0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu15 : 0.0 us, 0.0 sy, 0.0 ni,  0.0 id,100.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 32160.7 total, 22289.2 free, 680.0 used, 9191.5 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 30114.5 avail Mem

  PID USER PR NI   VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
 5614 root 20  0 283636 4124 3352 S  1.1  0.0 315:48.99 emhttpd
  781 root 20  0      0    0    0 I  0.5  0.0   0:01.72 kworker/3:1-events
15903 root 20  0   9120 5372 4852 S  0.5  0.0   0:00.37 sshd
32758 root 20  0      0    0    0 I  0.5  0.0   0:03.59 kworker/2:0-events

So I turned off everything, no users connected, and the CPU was still being used by something. Because I couldn't find what was using my processor, I tried other CPU tools. This is the result from htop (instead of top):

1 [        0.0%] 5 [|       0.7%]  9 [        0.0%] 13 [        0.0%]
2 [        0.0%] 6 [        0.0%] 10 [        0.0%] 14 [||      1.3%]
3 [        0.0%] 7 [        0.0%] 11 [        0.0%] 15 [        0.0%]
4 [        0.0%] 8 [        0.0%] 12 [        0.0%] 16 [        0.0%]
Mem[|||||||||||||||| 1.56G/31.4G]   Tasks: 53, 21 thr; 1 running
Swp[                      0K/0K]   Load average: 3.00 3.02 3.24
                                   Uptime: 43 days, 16:03:17

  PID USER  PRI NI  VIRT   RES   SHR S CPU% MEM%   TIME+ Command
 5614 root   20  0  276M  4124  3352 S  1.3  0.0 5h15:50 /usr/local/sbin/emhttpd
 5731 root   20  0  276M  4124  3352 S  1.3  0.0 4h50:57 /usr/local/sbin/emhttpd
14008 root   20  0  6432  4836  2872 R  0.7  0.0 0:00.55 htop
13996 root   20  0  102M 13076  7304 S  0.7  0.0 0:00.06 php-fpm: pool www
 7736 root   20  0 1264M 40588  1060 S  0.7  0.1 0:00.40 /usr/local/sbin/shfs /mnt/user -disks 511 204
13293 root   20  0  102M 13080  7304 S  0.7  0.0 0:00.16 php-fpm: pool www
 8508 root   20  0 1264M 40588  1060 S  0.7  0.1 0:00.33 /usr/local/sbin/shfs /mnt/user -disks 511 204
 2539 root   20  0 1264M 40588  1060 S  0.7  0.1 0:00.57 /usr/local/sbin/shfs /mnt/user -disks 511 204
 6021 root   20  0 1264M 40588  1060 S  0.0  0.1 14h02:16 /usr/local/sbin/shfs /mnt/user -disks 511 204
 3161 root   20  0 1264M 40588  1060 S  0.0  0.1 0:00.53 /usr/local/sbin/shfs /mnt/user -disks 511 204
13254 root   20  0  102M 13084  7304 S  0.0  0.0 0:00.17 php-fpm: pool www
13971 root   20  0  102M 13076  7304 S  0.0  0.0 0:00.05 php-fpm: pool www
31448 root   20  0 1264M 40588  1060 S  0.0  0.1 0:00.72 /usr/local/sbin/shfs /mnt/user -disks 511 204
13360 root   20  0  102M 12888  7112 S  0.0  0.0 0:00.15 php-fpm: pool www
14069 root   20  0  102M 12532  6780 S  0.0  0.0 0:00.03 php-fpm: pool www
15903 root   20  0  9120  5372  4852 S  0.0  0.0 0:00.46 sshd: root@pts/3
 7490 root   20  0  146M  8348  3792 S  0.0  0.0 21:14.61 nginx: worker process
 5730 root   20  0  276M  4124  3352 S  0.0  0.0 15:01.49 /usr/local/sbin/emhttpd
30845 root   20  0 1264M 40588  1060 S  0.0  0.1 0:00.68 /usr/local/sbin/shfs /mnt/user -disks 511 204
 5803 root   20  0  3788  2820  2548 S  0.0  0.0 26:37.57 /bin/bash /usr/local/emhttp/webGui/scripts/di
 9541 root   20  0 1264M 40588  1060 S  0.0  0.1 0:00.30 /usr/local/sbin/shfs /mnt/user -disks 511 204
 1880 root   20  0 1264M 40588  1060 S  0.0  0.1 0:00.54 /usr/local/sbin/shfs /mnt/user -disks 511 204
 2171 ntp    20  0 77212  4352  3716 S  0.0  0.0 2:18.06 /usr/sbin/ntpd -g -u ntp:ntp
 8509 root   20  0 1264M 40588  1060 S  0.0  0.1 0:00.35 /usr/local/sbin/shfs /mnt/user -disks 511 204
10756 root   20  0 1264M 40588  1060 S  0.0  0.1 0:00.28 /usr/local/sbin/shfs /mnt/user -disks 511 204
 1967 root   20  0  8212  4436  1308 S  0.0  0.0 8:37.61 /sbin/haveged -w 1024 -v 1 -p /var/run/havege
    1 root   20  0  2468  1840  1736 S  0.0  0.0 0:31.52 init
24860 root   20  0  5680  2228     0 S  0.0  0.0 0:00.00 /usr/sbin/rpc.mountd
24765 root   20  0  2524  1684  1580 S  0.0  0.0 0:00.00 /usr/sbin/inetd
24685 root   20  0  8988  3348  2840 S  0.0  0.0 0:00.00 /usr/sbin/sshd
15905 root   20  0  7344  4176  3264 S  0.0  0.0 0:00.05 -bash
13671 root   20  0  3848  2108  1776 S  0.0  0.0 0:03.70 /bin/bash /usr/local/emhttp/plugins/dynamix.s
13274 root   20  0  2448   756   696 S  0.0  0.0 0:00.00 sleep 300
12065 root   20  0 50600 10760  8088 S  0.0  0.0 0:00.00 /usr/sbin/winbindd -D
12267 root   20  0 51016  9472  6816 S  0.0  0.0 0:00.00 /usr/sbin/winbindd -D
12068 root   20  0 50968 15896 13312 S  0.0  0.0 0:00.00 /usr/sbin/winbindd -D
12062 root   20  0  2512   104     0 S  0.0  0.0 0:00.00 /usr/sbin/wsdd
12055 root   20  0 35340  6540  4380 S  0.0  0.0 0:00.01 /usr/sbin/nmbd -D
12050 root   20  0 59900 17552 14712 S  0.0  0.1 0:00.00 /usr/sbin/smbd -D
12054 root   20  0 50340  7284  4568 S  0.0  0.0 0:00.00 /usr/sbin/smbd -D
12053 root   20  0 50332  8140  5424 S  0.0  0.0 0:00.00 /usr/sbin/smbd -D
 7489 root   20  0  145M  3812    28 S  0.0  0.0 0:00.00 nginx: master process /usr/sbin/nginx -c /etc
 7483 root   20  0 13492  5132  3952 S  0.0  0.0 1:25.22 ttyd -d 0 -i /var/run/ttyd.sock login -f root
 7447 root   20  0   99M 11800  6232 S  0.0  0.0 1:31.38 php-fpm: master process (/etc/php-fpm/php-fpm
11706 root   20  0  102M 13288  7432 S  0.0  0.0 0:00.05 php-fpm: pool www
13828 root   20  0 17296 14732  4104 D  0.0  0.0 0:00.24 lshw -xml -sanitize -quiet
 6008 root   20  0  141M   744   280 S  0.0  0.0 0:02.62 /usr/local/sbin/shfs /mnt/user0 -disks 510 -o
 6010 root   20  0  141M   744   280 S  0.0  0.0 0:01.30 /usr/local/sbin/shfs /mnt/user0 -disks 510 -o
 6009 root   20  0  141M   744   280 S  0.0  0.0 0:01.31 /usr/local/sbin/shfs /mnt/user0 -disks 510 -o
 5715 root   20  0  4716   108     0 S  0.0  0.0 0:00.00 /usr/sbin/avahi-dnsconfd -D
 5704 avahi  20  0  6420  3288  2848 S  0.0  0.0 1:11.14 avahi-daemon: running [VDS.local]
 5706 avahi  20  0  6024   264     0 S  0.0  0.0 0:00.00 avahi-daemon: chroot helper
 5733 root   20  0  276M  4124  3352 S  0.0  0.0 0:00.00 /usr/local/sbin/emhttpd

So nothing is using my CPU... and now I'm confused: was my CPU in use or not? When I reboot the server I see normal CPU usage of ~3% on the dashboard. server-diagnostics-20200525-0138(anony).zip
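One possible reading of the two top outputs above: cores 11 and 15 are not executing anything; they are accumulating iowait (`wa`), which top charges to a CPU while a task on it sleeps uninterruptibly waiting for I/O. That would also explain the stuck load average of ~3 alongside an idle process list, since D-state tasks count toward load without using CPU time. A small sketch (assuming a standard /proc layout) that lists such tasks:

```shell
# List processes in uninterruptible sleep (state D). These inflate
# load average and per-CPU %wa in top, yet show 0.0 %CPU, matching
# the "core pinned but nothing running" symptom.
list_dstate() {
    for f in /proc/[0-9]*/status; do
        state=$(awk '/^State:/ {print $2}' "$f" 2>/dev/null)
        if [ "$state" = "D" ]; then
            pid=${f#/proc/}; pid=${pid%/status}
            name=$(awk '/^Name:/ {print $2}' "$f" 2>/dev/null)
            echo "PID $pid ($name) stuck in uninterruptible sleep"
        fi
    done
    return 0
}
list_dstate
```

Notably, the `lshw -xml -sanitize -quiet` process in the htop listing above is already in state D, which fits this picture.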
  3. Today I found this post... I ran iotop for a few hours:
Start -> Mon May 11 11:45:17 CEST 2020
End -> Mon May 11 18:49:54 CEST 2020

root@VDS:~# iotop -oa
Total DISK READ : 0.00 B/s | Total DISK WRITE : 24.44 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 24.18 M/s
  TID PRIO USER  DISK READ DISK WRITE>  SWAPIN     IO COMMAND
 6169 be/0 root   65.20 M   100.45 G   0.00 % 1.06 % [loop2]
18510 be/4 root   58.20 M  1852.94 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18491 be/4 root   59.39 M  1839.84 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
10221 be/4 root   59.02 M  1809.12 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18515 be/4 root   57.02 M  1775.67 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18601 be/4 root   58.44 M  1751.80 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18600 be/4 root   60.01 M  1746.48 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18492 be/4 root   59.45 M  1741.75 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18502 be/4 root   58.53 M  1730.81 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18451 be/4 root   58.43 M  1666.26 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
18505 be/4 root   60.94 M  1611.01 M   0.00 % 0.03 % shfs /mnt/user -disks 511 2048000000 -o noatime,allow_other -o remember=0
 6192 be/4 root  140.00 K  1516.05 M   0.00 % 0.11 % [btrfs-transacti]
[...]

Everything after the last line shown is less than 200 M of writes. I have a new SSD rated for 400 TBW with a 5-year warranty.
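The top writer in that listing is [loop2], so a natural next step is to see what backs that loop device; on Unraid it is typically a loop-mounted image file such as docker.img or libvirt.img, which absorbs container and VM write churn. A sketch (assuming sysfs is mounted; `loop2` is taken from the iotop output above):

```shell
# Print the file backing a loop device, e.g. loop2 from the iotop
# listing; the 100+ GB of writes charged to [loop2] land in this file.
loop_backing() {
    dev=$1
    if [ -r "/sys/block/$dev/loop/backing_file" ]; then
        cat "/sys/block/$dev/loop/backing_file"
    else
        echo "$dev: not an active loop device on this host"
    fi
}
loop_backing loop2
```

`losetup -l` gives the same mapping for all loop devices at once, where available.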
  4. I'm wondering: is this a bug in the Unraid software or in Linux, and do we have any estimate for when it will work again the way it did in 6.7.2? I bought a hardware firewall because of this bug a long time ago, and I would like to go back to pfSense on Unraid, but it needs to be as stable as before. So what exactly is the problem?
  5. From time to time I get a notice about an update:
Docker - pyload [286a..482a]: 2019-07-16 00:10 Notice [VDS] - Docker update 286a..482a A new version of pyload is available
If there is no new version of pyload, what is changing every time I get a notice like the one above?
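A guess at what is changing: the [286a..482a] part looks like truncated image IDs, and the Docker update check compares the digest of the locally pulled image against what the registry's tag currently resolves to. When the image is rebuilt upstream (for example on a base-image refresh) the tag points at a new digest, so the notice fires even though the application version number did not change. A sketch for inspecting the local side (assumes the docker CLI, and that the image is tagged `pyload` as in the notice):

```shell
# Show the digest the local image was pulled as; if the registry tag
# now resolves to a different digest, an "update available" notice
# appears even though the app inside may be unchanged.
show_local_digest() {
    img=$1
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker CLI not available here"
        return 0
    fi
    docker image inspect --format '{{index .RepoDigests 0}}' "$img" 2>/dev/null \
        || echo "$img: image not found locally"
}
show_local_digest pyload
```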