Everything posted by BRiT

  1. The processes gone wild are kicked off by user "proxmox", so something they're doing is triggering badness and using nearly 58% of your memory. The first one was started on December 13th, the next one on December 22nd. Their combined virtual memory size is 13,451,824 KB, or about 12.8 Gig (see the process-inspection sketch after this list for how to pull listings like these yourself).
     proxmox 9770 0.3 12.9 4188792 2063548 ? S Dec13 62:28 \_ /usr/sbin/smbd -D
     9770 proxmox 20 0 4188792 2.0g 20232 S 0.0 13.0 62:28.90 smbd
     COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
     smbd 9770 proxmox 35u IPv4 1645887 0t0 TCP 192.168.2.6:445->192.168.2.28:47650 (ESTABLISHED)
     proxmox 21062 0.6 44.8 9263032 7128992 ? S Dec22 26:06 \_ /usr/sbin/smbd -D
     21062 proxmox 20 0 9263032 6.8g 19660 S 0.0 44.8 26:06.34 smbd
     COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
     smbd 21062 proxmox 36u IPv4 74503557 0t0 TCP 192.168.2.6:445->192.168.2.19:53712 (ESTABLISHED)
  2. Maybe face the unfortunate reality that you might need a new system to suit your new needs.
  3. To be pedantic, it cannot recover at the file level; recovery happens at the device level, and the device just happens to contain files. Unraid and parity are not able to recover previous versions of files. One still needs to plan for backups accordingly.
  4. Hint: Here's an entire thread for it ...
  5. Processes using a lot of memory, at least from the virtual-memory viewpoint: that's at least 50 Gig across 3 processes. Some of them are expected, like your VM at 25.7% (qemu) and your database (influxdb for Glances), but likely not expected is the Java (Tomcat) process inside your MariaDB container. You might want to figure out what exactly that Java process is doing as part of its /etc/firstrun/tomcat-wrapper.sh script, or try applying memory limits to your dockers (see the memory-limit sketch after this list).
     15086 root 20 0 21.7g 881148 7036 S 0.0 1.3 0:29.58 java
     root 28845 0.0 0.0 109100 10184 ? Sl Dec20 1:02 | \_ containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/32058f818192f568d243e2881ebc339f68ef8a750b9d75bb74ef037e35e7226f -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
     root 28871 0.0 0.0 4184 84 ? Ss Dec20 0:02 | | \_ /bin/tini -- /usr/bin/supervisord -n -c /etc/supervisor/conf.d/supervisord-mariadb.conf
     root 29149 0.0 0.0 57720 13176 ? S Dec20 0:20 | | \_ /usr/bin/python /usr/bin/supervisord -n -c /etc/supervisor/conf.d/supervisord-mariadb.conf
     root 15073 0.0 0.0 19876 788 ? S 06:43 0:00 | | \_ /bin/bash /etc/firstrun/tomcat-wrapper.sh
     root 15086 0.3 1.3 22734356 881148 ? Sl 06:43 0:29 | | | \_ /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Djava.util.logging.config.file=/var/lib/tomcat8/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -classpath /usr/share/tomcat8/bin/bootstrap.jar:/usr/share/tomcat8/bin/tomcat-juli.jar -Dcatalina.base=/var/lib/tomcat8 -Dcatalina.home=/usr/share/tomcat8 -Djava.io.tmpdir=/var/lib/tomcat8/temp org.apache.catalina.startup.Bootstrap start
     nobody 32613 0.0 0.0 79032 9776 ? S Dec20 0:00 | | \_ /usr/local/guacamole/sbin/guacd -b 0.0.0.0 -L info -f
     root 32617 0.0 0.0 19996 572 ? S Dec20 0:00 | | \_ /bin/bash /usr/bin/mysqld_safe --skip-syslog
     nobody 1023 0.0 0.1 1898776 75968 ? Sl Dec20 2:02 | | \_ /usr/sbin/mysqld --basedir=/usr --datadir=/config/databases --plugin-dir=/usr/lib/mysql/plugin --user=nobody --skip-log-error --log-error=/config/databases/32058f818192.err --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
     12822 root 20 0 16.8g 16.2g 19368 S 31.2 25.8 3344:45 qemu-syst+
     root 12822 114 25.7 17575796 16964632 ? SLl Dec20 3344:45 /usr/bin/qemu-system-x86_64 -name guest=archlabs
     20886 root 20 0 11.0g 582796 20864 S 6.2 0.9 13:07.74 influxd
     root 20886 1.1 0.8 11574620 582796 ? Ssl Dec21 13:07 | | \_ influxd
     Additional processes using more scraps of memory:
     8868 root 20 0 1611544 111248 5876 S 0.0 0.2 16:25.43 python3
     root 8868 0.5 0.1 1611544 111248 ? S Dec20 16:25 | | \_ python3 /app/intelligence.py
     8871 root 20 0 5738412 1.3g 13212 S 0.0 2.1 40:42.89 python3
     root 8871 1.4 2.1 5738412 1408524 ? Sl Dec20 40:42 | | \_ python3 /app/intelligence.py
     8873 root 20 0 3261176 958952 23048 S 0.0 1.5 333:31.89 python3
     root 8873 11.5 1.4 3261176 958952 ? Sl Dec20 333:31 | | \_ python3 /app/intelligence.py
     12042 nobody 20 0 8223412 553348 21308 S 0.0 0.8 3:00.97 java
     nobody 12042 0.5 0.8 8223412 553348 ? Ssl 00:00 3:00 | | \_ java -Xmx1024M -jar /usr/lib/unifi/lib/ace.jar start
  6. Spending $2500 is easy -- Spread it around between LimeTech and the Community Developers. 😜 As to hardware upgrades, the best performance for the money seems to be on the AMD CPU side.
  7. Idea: Post your full Diagnostics.zip file in your next post after you encounter the issue and before you reboot.
  8. Negative. You can, but you need to put "bash " in front of them. For instance: bash /boot/myscripts/my_custom_script.sh (see the flash-script sketch after this list).
  9. Everyone posting in here running 6.7.2 should upgrade to 6.8 Stable to at least remove the chance of your slowdown being from the "writes starves reads" bug.
  10. Not really needed, just follow written posts. Not explicitly for this setup but very similar.
  11. Yeah, it certainly shouldn't impact things one bit, so some sort of feedback-loop bug? It doesn't seem like just a bug in reporting unless it also impacts the SSD's SMART attribute Total_LBAs_Written too. Maybe this thread, in particular this post:
  12. That's not what others found with BTRFS and encryption. They say that as soon as they switched to XFS with encryption their writes were normal, while with BTRFS and encryption their writes were insane.
  13. You really need to stop your cat from sabotaging your server. That or you have some sort of electrical fault issue with your power, power supply, motherboard, and/or usb ports. I'm still using my original USB stick from a decade ago.
  14. Most likely Crashplan or Filebot is the culprit. You need to limit the memory that pos can see in its Docker. Someone else posted how to do that, but I'm on mobile so can't find and post the link for you (see the running-container memory-limit sketch after this list). Here is a python process it victimized at the time your memory was all used up:
     Dec 18 14:32:44 UnRaider kernel: [ 13711] 99 13711 9653899 9000662 77455360 0 0 python
     Dec 18 14:32:44 UnRaider kernel: [ 17648] 99 17648 2554308 406096 4460544 0 0 CrashPlanServic
     Here's top memory use after the OOM Killer:
     18886 nobody 20 0 20.2g 157668 1828 S 0.0 0.2 2:40.57 java
     10509 root 20 0 9445528 8.2g 21656 S 81.2 13.0 1000:54 qemu-syst+
     10433 root 20 0 13.2g 12.7g 21704 S 50.0 20.2 884:48.24 qemu-syst+
     17648 nobody 20 0 9.8g 1.0g 3612 S 0.0 1.7 97:16.19 CrashPlan+
     root 17294 0.0 0.0 10736 5532 ? Sl 03:51 0:01 | \_ containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/18fc1f76d87ba3ee433bbcef53435107635431cb04be2024c5405982ef42a781 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/containerd -runtime-root /var/run/docker/runtime-runc
     root 17323 0.0 0.0 31652 4736 ? Ss 03:51 0:00 | | \_ /usr/bin/python3 -u /sbin/my_init
     root 18801 0.0 0.0 196 12 ? S 03:51 0:00 | | \_ /usr/bin/runsvdir -P /etc/service
     root 18813 0.0 0.0 176 4 ? Ss 03:51 0:00 | | \_ runsv cron
     root 18829 0.0 0.0 26752 848 ? S 03:51 0:00 | | | \_ /usr/sbin/cron -f
     root 18814 0.0 0.0 176 20 ? Ss 03:51 0:00 | | \_ runsv syslog-forwarder
     root 18839 0.0 0.0 7480 80 ? S 03:51 0:00 | | | \_ tail -f -n 0 /var/log/syslog
     root 18815 0.0 0.0 176 4 ? Ss 03:51 0:00 | | \_ runsv syslog-ng
     root 18842 0.0 0.0 65844 2932 ? S 03:51 0:00 | | | \_ syslog-ng -F -p /var/run/syslog-ng.pid --no-caps
     root 18816 0.0 0.0 176 20 ? Ss 03:51 0:00 | | \_ runsv X11rdp
     nobody 18838 0.0 0.0 70516 9496 ? S 03:51 0:00 | | | \_ X11rdp :1 -bs -ac -nolisten tcp -geometry 1280x720 -depth 16 -uds
     root 18817 0.0 0.0 176 24 ? Ss 03:51 0:00 | | \_ runsv guacd
     root 18833 0.0 0.0 78160 9956 ? S 03:51 0:00 | | | \_ /usr/local/sbin/guacd -f
     root 18818 0.0 0.0 176 20 ? Ss 03:51 0:00 | | \_ runsv openbox
     nobody 18835 0.0 0.0 147836 2516 ? S 03:51 0:00 | | | \_ /usr/bin/openbox --startup /usr/lib/x86_64-linux-gnu/openbox-autostart OPENBOX
     root 18819 0.0 0.0 176 4 ? Ss 03:51 0:00 | | \_ runsv tomcat7
     root 18837 0.1 0.3 2789840 235192 ? Sl 03:51 1:14 | | | \_ /usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.util.logging.config.file=/var/lib/tomcat7/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.awt.headless=true -Xmx128m -XX:+UseConcMarkSweepGC -Djava.endorsed.dirs=/usr/share/tomcat7/endorsed -classpath /usr/share/tomcat7/bin/bootstrap.jar:/usr/share/tomcat7/bin/tomcat-juli.jar -Dcatalina.base=/var/lib/tomcat7 -Dcatalina.home=/usr/share/tomcat7 -Djava.io.tmpdir=/tmp/tomcat7-tomcat7-tmp org.apache.catalina.startup.Bootstrap start
     root 18820 0.0 0.0 176 36 ? Ss 03:51 0:00 | | \_ runsv xrdp
     root 18828 0.0 0.0 22604 484 ? S 03:51 0:00 | | | \_ /usr/sbin/xrdp --nodaemon
     root 18821 0.0 0.0 176 16 ? Ss 03:51 0:00 | | \_ runsv xrdp-sesman
     root 18824 0.0 0.0 35052 432 ? S 03:51 0:00 | | | \_ /usr/sbin/xrdp-sesman --nodaemon
     root 18822 0.0 0.0 176 20 ? Ss 03:51 0:00 | | \_ runsv filebot
     root 18836 0.0 0.0 21096 236 ? S 03:51 0:00 | | | \_ /bin/bash ./run
     root 24529 0.0 0.0 259060 10804 ? Sl 03:51 0:37 | | | \_ /usr/bin/python3 /files/monitor.py /files/FileBot.conf
     root 18823 0.0 0.0 176 16 ? Ss 03:51 0:00 | | \_ runsv filebot-ui
     root 18831 0.0 0.0 21104 240 ? S 03:51 0:00 | | \_ /bin/bash ./run
     nobody 18849 0.0 0.0 4448 84 ? S 03:51 0:00 | | \_ /bin/sh /usr/bin/filebot
     nobody 18886 0.2 0.2 21211968 157668 ? Sl 03:51 2:40 | | \_ java -Dsun.java2d.xrender=false -Dunixfs=false -DuseGVFS=true -DuseExtendedFileAttributes=true -DuseCreationDate=false -Djava.net.useSystemProxies=true -Djna.nosys=false -Djna.nounpack=true -Dapplication.deployment=deb -Dnet.filebot.gio.GVFS=/gvfs -Dapplication.dir=/nobody/.filebot -Djava.io.tmpdir=/nobody/.filebot/temp -Dnet.filebot.AcoustID.fpcalc=/usr/share/filebot/fpcalc -jar /usr/share/filebot/FileBot.jar
  15. @golli53 are you running Dynamix Cache Dirs at all? If not, can you try running it and then trying tests again?
  16. Parity 1 >= max size of any data drive
      Parity 2 >= max size of any data drive
      Parity drive(s) must be greater than or equal to the size of the largest data drive. In @Goodboy428's situation, you can simply use a 4TB or larger drive for parity since your largest data drive is 4TB.
  17. Run it in its own docker with its own nginx instance on its own IP or port (see the nginx sketch after this list).
  18. For more details on Parity methods, here's a good starting point to read:
  19. Writes to the array are only limited by the slowest of the drives involved. With the typical write method of read/modify/write, only the single data drive being written to and the parity drive(s) are involved. If you have the write method set to reconstruct write, then all data drives are involved. Some call this Turbo Write, and it can be much faster since the data drive doesn't have to do a read-and-write, just a write, while the other data drives do a read and the parity drives write the recalculated parity. Parity can be added later; same with a second parity drive. The first time a parity drive is added, parity is calculated from scratch; that is the "parity build" and it involves all drives. When you are validating the parity information, that is a "parity check" and it also involves all drives. If you ever have to rebuild a drive, that is a "parity rebuild" or "drive rebuild", and it involves all the other drives to recalculate what the replacement drive should contain. (A sketch of toggling the write method follows after this list.)
  20. Unraid's only limitation is that the parity drive(s) must be as large as or larger than any individual data drive. Parity check and parity rebuild speeds are limited by the slowest drive(s) involved. The other thing to note is that parity is NOT backup. Parity is protection from drive loss, but it is not a backup. If you delete a file on a data drive, the parity drive(s) are updated in real time, so you can not restore the file.
  21. I've always seen new files created with the owner set to the user logged into the samba share (for instance, user "wmce") and the group set to "users", as far back as I can remember. My shares are set as either Secure or Private.
  22. You might have hardware issues or absolutely the worst of luck if you have drive failures every 2 years. Or maybe you have a cat that is sabotaging your servers.
  23. Have you tried running stock Unraid and not the Unraid-NVIDIA build?
  24. Finally stumbled across this bug report (previous web searches didn't include this thread). It happened to me, and I only have a desktop Win10 system running "Media Center Master" that scans media for new metadata. I just set the WSD Options to "-i br0" (see the wsdd sketch after this list). I will report back if I run across the issue again.
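
For item 1: a minimal sketch, assuming console or SSH access to the server, of how process listings like the ones quoted there can be gathered. PID 9770 is taken from that post and is only an example.

    # sort all processes by resident memory, largest first
    ps aux --sort=-rss | head -n 10
    # show the process tree so you can see which parent kicked off a runaway child
    ps auxf | less
    # list the open network connections of one suspect process (PID from the post above)
    lsof -a -p 9770 -i -n -P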
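For item 5: a sketch of capping a container's memory at creation time. The container name, image name, and 2g limit are placeholders; in the Unraid web UI the same flags would typically go into the container's Extra Parameters field (advanced view).

    # cap a container at 2 GB of RAM when it is created
    docker run -d --name mariadb-guac --memory=2g --memory-swap=2g my/guacamole-image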
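For item 8: a minimal sketch of the two usual ways around the flash drive's lack of execute permissions. The script path comes from that post; /tmp is just an example destination.

    # the execute bit cannot be set on the FAT32 flash drive, so hand the script to bash explicitly
    bash /boot/myscripts/my_custom_script.sh
    # or copy it somewhere that allows execute bits and run it from there
    cp /boot/myscripts/my_custom_script.sh /tmp/
    chmod +x /tmp/my_custom_script.sh
    /tmp/my_custom_script.sh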
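For item 14: a sketch of limiting an already-running container without recreating it, assuming the container is named CrashPlan and that a 1 GB cap is appropriate (both are assumptions, not values from the post).

    # see which containers are actually eating RAM right now
    docker stats --no-stream
    # cap a running container in place (name and limit are placeholders)
    docker update --memory=1g --memory-swap=1g CrashPlan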
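For item 17: a sketch of running a standalone nginx container on its own port; the port, container name, and content path are assumptions for illustration.

    # serve static content from its own nginx instance on port 8081
    docker run -d --name my-nginx -p 8081:80 -v /mnt/user/appdata/my-nginx:/usr/share/nginx/html:ro nginx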
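For item 19: a sketch of switching between the two write methods from the command line, assuming the md_write_method tunable behaves as it did on the Unraid 6.x releases discussed in these posts; verify the accepted values on your release, and note the same setting lives in Settings > Disk Settings in the web UI.

    # assumed values: 1 = reconstruct write ("turbo write"), 0 = read/modify/write
    mdcmd set md_write_method 1
    mdcmd set md_write_method 0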
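For item 24: the "-i br0" string from that post is an interface restriction for the WS-Discovery daemon (wsdd), so it only listens on the main bridge. A minimal sketch of the equivalent command line, assuming br0 is your LAN bridge; on Unraid the same string goes into the WSD Options field under the SMB settings, as the post describes.

    # restrict WS-Discovery to the br0 bridge only
    wsdd -i br0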