Everything posted by WarHawk8080

  1. I have been using rsync to back up my audiobooks from my Unraid box to a second, smaller TrueNAS Scale box. I saw the validate option in luckyBackup and was thinking about this: would something like the following work as a cron job?

     rsync -aqu --protect-args --chown=warhawk:warhawk /mnt/user/appdata [email protected]:/mnt/optimus/NAS/WarHawk/MOBIUS.DOCKER.BACKUP

     -a is "archive mode", equivalent to -rlptgoD. -q is quiet (no output, it just does it). -u (update) makes rsync skip files that are newer on the destination. --protect-args (-s) passes filenames to the remote rsync without shell interpretation, so paths with spaces survive; it is not the same as -p (preserve permissions), which -a already includes. --chown makes sure files on the target get the same user:group I am backing up as (otherwise my user ends up with a directory full of nobody:users files/directories). Of course, I ran the first pass like this, to watch all the awesome text go by and ensure it was actually going to the correct location:

     rsync -avhP /mnt/user/appdata [email protected]:/mnt/optimus/NAS/WarHawk/MOBIUS.DOCKER.BACKUP

     Since I followed the howto and installed /root/.ssh/id_rsa, passwordless SSH login works and the backup goes straight to my second box.
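     As a sketch of the cron side (the schedule, remote host, and log path below are placeholders, not from the original setup):

        # Hypothetical crontab entry: nightly backup at 03:00; adjust to taste
        0 3 * * * rsync -aqu --protect-args --chown=warhawk:warhawk /mnt/user/appdata warhawk@truenas:/mnt/optimus/NAS/WarHawk/MOBIUS.DOCKER.BACKUP >> /var/log/audiobook-backup.log 2>&1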
  2. Not a howto, but a "call to action" for the devs to see if there is some way to implement easier/automated filesystem maintenance (if there isn't some already there).
  3. I did a search...not much on BTRFS filesystem tools, particularly defragmentation and the like. I found a few links: https://lexming.github.io/posts/simple-btrfs-scrub/ and https://github.com/kdave/btrfsmaintenance, which is a bunch of scripts for "automatic" filesystem maintenance that might be used to automate things on BTRFS arrays. I ran this command on my system...let's hope I didn't bork something:

     btrfs filesystem defragment -rf /mnt/disk* &

     (-r recurses into directories; -f flushes data for each file before moving to the next.)
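     If the goal is just scheduled scrubs without pulling in all of btrfsmaintenance, a minimal sketch (assuming btrfs disks mounted under /mnt/disk*, which may not match every array):

        #!/bin/bash
        # Hypothetical monthly scrub: -B runs each scrub in the foreground,
        # so disks are scrubbed one at a time instead of all at once
        for disk in /mnt/disk*; do
            btrfs scrub start -B "$disk"
        done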
  4. Found this...not sure if the devs could poke around with it and see if there is any way to implement scripting for btrfsmaintenance (not even sure whether something automatic is already built in): https://github.com/kdave/btrfsmaintenance
  5. Ok...so who do I use to download, then? Yeah...nzbgeek is the indexer, not nzbget...doh. Sorry, my brain is on fire from trying over and over to get this working. Aaaand...I figured it out: newsgroup.ninja is my download provider...which I HAD NOT BEEN USING! Pay no attention to the guy drooling on his keyboard at the moment.
  6. Ok...either the news servers I use (drunkenslug and nzbget) are down, or I can't connect to them through port 563, either behind my OpenWRT router or through ExpressVPN with OpenVPN Connect. I know I am connecting (the logs say it's connecting to ExpressVPN's Dallas servers), but I cannot get SABnzbd to authenticate when I add the server for downloads: [Errno 111] timed out. I am using port 563 with SSL in the server options, but it just will not connect. This is killing me...I have been beating my head on a rock for a week and still can't make headway...
  7. I get this when I try to pull the SSL cert:

     openssl s_client -showcerts -connect api.nzbget.info:563
     23425284581184:error:2008F002:BIO routines:BIO_lookup_ex:system lib:crypto/bio/b_addr.c:730:Name or service not known
     connect:errno=22

     I do get a good SSL cert when I go to google.com:443. ARRRGH
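     Worth noting: "Name or service not known" is a DNS resolution failure, not a TLS one, so the cert never comes into play. A quick way to separate the two (standard tools, nothing Unraid-specific):

        # Does the name resolve at all through the system resolver?
        getent hosts api.nzbget.info

        # Compare against a known-good public resolver, bypassing local DNS
        nslookup api.nzbget.info 8.8.8.8

        # Only once resolution works does the TLS handshake get a chance
        openssl s_client -showcerts -connect api.nzbget.info:563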
  8. It's weird...my Sonarr and Radarr can hit both nzbget and drunkenslug just fine; the "test" button shows a check. The difference is that their webgui shows https://XXXXXXXXXX, whereas nzbget and sabnzbd won't let you set that (even though you can set the port and enable SSL support). My network/DNS can hit those news servers just fine...just the downloader(s) are busted.
  9. Just did that...

     Connection to api.nzbgeek.info failed: Error 99 - Cannot assign requested address
  10. Boom...done...missed that! Thanks!!!!!
  11. Is there any way to get the appdata backup program to use pigz, so it can use multiple CPUs to compress the backup image instead of a single one? It takes 20 minutes to back up some apps (like Plex and audiobookshelf). I know I have an "older" i7-7700 processor, but it shouldn't take that long in most cases. https://stackoverflow.com/questions/12313242/utilizing-multi-core-for-targzip-bzip-compression-decompression https://zlib.net/pigz/
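      For reference, a sketch of how tar can be pointed at pigz instead of gzip (archive name and paths are placeholders):

         # GNU tar: hand compression off to pigz, which uses all cores
         tar -I pigz -cf /mnt/user/backups/appdata.tar.gz /mnt/user/appdata

         # Equivalent pipe form, for tar builds without -I
         tar -cf - /mnt/user/appdata | pigz > /mnt/user/backups/appdata.tar.gz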
  12. Just deleted the container and the directory...reinstalled...uggh
  13. Not sure if I set a password or not, but I can't make changes; I keep getting:

      These credentials do not match our records

      Is there a way to remove/reset/change that password? All I have in /mnt/user/appdata/heimdall is:

      root@MOBIUS:/mnt/user/appdata/heimdall# ll
      total 4.0K
      drwxr-xr-x 1 nobody users  83 Feb  3 06:42 ./
      drwxrwxrwx 1 nobody users 236 Feb 15 20:05 ../
      -rw-r--r-- 1 nobody users  48 Feb  3 06:42 .migrations
      drwxr-xr-x 1 nobody users  38 Feb  3 06:42 keys/
      drwxr-xr-x 1 nobody users  70 Feb 19 02:00 log/
      drwxrwxr-x 1 nobody users 193 Feb 19 18:31 nginx/
      drwxr-xr-x 1 nobody users  44 Feb  3 06:42 php/
      drwxr-xr-x 1 nobody users 162 Feb 19 18:31 www/
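      Heimdall is a Laravel app, so the user records normally live in an SQLite file somewhere under that tree (likely under www/); a hedged way to hunt for it (the filename pattern is an assumption, not taken from the listing above):

         # Look for the SQLite database holding Heimdall's users
         find /mnt/user/appdata/heimdall -name '*.sqlite*'

         # If one turns up, list its tables (needs the sqlite3 CLI)
         sqlite3 /path/to/found.sqlite '.tables'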
  14. I just set up Syncthing. I set the /config directory to /mnt/user/syncthing (not appdata), logged into the webgui, added my cellphone, shared my photos, music, and DCIM with the Syncthing on Unraid, and it started syncing. I can log into a shell and see the directories in /mnt/user/syncthing (along with all the configs and keys). If it lives in appdata, it fills up the docker image file. So I guess it works if you tell it to use a static directory; the data1 and data2 environment settings do nothing.

      root@MOBIUS:/mnt/user/syncthing# ls -lah
      total 64K
      drwx------  1 nobody users  248 Oct 11 14:44 ./
      drwxrwxrwx+ 1 nobody users  148 Oct 11 14:39 ../
      drwxr-xr-x  1 nobody users  268 Oct 11 14:41 Camera/
      drwxr-xr-x  1 nobody users  114 Oct 11 14:41 Dcim/
      drwxr-xr-x  1 nobody users  816 Oct 11 14:41 Music/
      drwxr-xr-x  1 nobody users    0 Oct 11 14:44 Sync/
      -rw-r--r--  1 nobody users  794 Oct 11 14:39 cert.pem
      -rw-------  1 nobody users  13K Oct 11 14:44 config.xml
      -rw-------  1 nobody users 7.8K Oct 11 14:39 config.xml.v0
      -rw-------  1 nobody users   33 Oct 11 14:39 csrftokens.txt
      drwxr-xr-x  1 nobody users    0 Oct 11 14:42 data1/
      drwxr-xr-x  1 nobody users    0 Oct 11 14:42 data2/
      -rw-r--r--  1 nobody users  806 Oct 11 14:39 https-cert.pem
      -rw-------  1 nobody users  288 Oct 11 14:39 https-key.pem
      drwxr-xr-x  1 nobody users  140 Oct 11 14:42 index-v0.14.0.db/
      -rw-------  1 nobody users  288 Oct 11 14:39 key.pem
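      For anyone wiring this up from the command line rather than the Unraid template, a sketch of the equivalent mapping (image name and ports follow the common linuxserver.io pattern and are assumptions here):

         docker run -d --name=syncthing \
           -p 8384:8384 -p 22000:22000 \
           -v /mnt/user/syncthing:/config \
           lscr.io/linuxserver/syncthing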
  15. Was able to easily install and set up Nvidia for my Tdarr docker container by adding the Nvidia devices variable to the container...works great! I also have an Intel CPU. I was poking around the net and found this, which shows how to build the Intel "rendering toolkit": https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-intel-oneapi-render-linux/top/configure-your-system.html I am OK at Linux but have NO clue how to work with Slackware. Was wondering if one of you gurus out there with a test/development box could have a go at this, possibly build the supporting drivers and make a docker package from it; then maybe a variable for Intel devices might work, as in the sketch below. Just an idea.
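      For plain Intel Quick Sync (as opposed to the full oneAPI rendering toolkit), getting the iGPU into a container is usually just a device mapping; a sketch, assuming the host's i915 driver is loaded (the image name follows the project's published one, still an assumption here):

         # Expose the Intel iGPU render nodes to the container
         docker run -d --name=tdarr \
           --device=/dev/dri:/dev/dri \
           ghcr.io/haveagitgat/tdarr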
  16. The beautiful thing about zswap is that it can compress rarely-used pages headed for swap and keep them in RAM. Swap capability should ALWAYS be used on a Linux-type system, even if it is a compressed virtual version of itself. https://www.kernel.org/doc/html/v4.18/vm/zswap.html https://haydenjames.io/linux-performance-almost-always-add-swap-space/ I really think it should be something UnRAID utilizes, or at the very least gives us the option to use: https://www.linuxquestions.org/questions/slackware-14/requests-for-current-14-2-15-0-a-4175620463/page140.html#post5902241 The odd thing is, zramctl is included in the latest version of UnRAID (as of this posting, version 6.8.3):

      root@MOBIUS:/mnt/cache# zramctl --help

      Usage:
       zramctl [options] <device>
       zramctl -r <device> [...]
       zramctl [options] -f | <device> -s <size>

      Set up and control zram devices.

      Options:
       -a, --algorithm lzo|lz4|lz4hc|deflate|842   compression algorithm to use
       -b, --bytes               print sizes in bytes rather than in human readable format
       -f, --find                find a free device
       -n, --noheadings          don't print headings
       -o, --output <list>       columns to use for status output
           --output-all          output all columns
           --raw                 use raw status output format
       -r, --reset               reset all specified devices
       -s, --size <size>         device size
       -t, --streams <number>    number of compression streams
       -h, --help                display this help
       -V, --version             display version

      Available output columns:
       NAME        zram device name
       DISKSIZE    limit on the uncompressed amount of data
       DATA        uncompressed size of stored data
       COMPR       compressed size of stored data
       ALGORITHM   the selected compression algorithm
       STREAMS     number of concurrent compress operations
       ZERO-PAGES  empty pages with no allocated memory
       TOTAL       all memory including allocator fragmentation and metadata overhead
       MEM-LIMIT   memory limit used to store compressed data
       MEM-USED    memory zram have been consumed to store compressed data
       MIGRATED    number of objects migrated by compaction
       MOUNTPOINT  where the device is mounted

      For more details see zramctl(8).

      But the actual zram kernel module is not included (possibly stripped out by the developers?):

      root@MOBIUS:/mnt/cache# modprobe zram num_devices=4
      modprobe: FATAL: Module zram not found in directory /lib/modules/4.19.107-Unraid
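      If the module were shipped, setting up compressed swap with the bundled zramctl would look roughly like this (sizes are illustrative; a sketch only, since the missing module makes it untestable on stock Unraid):

         # Load the module -- this is the step that currently fails on Unraid
         modprobe zram

         # Claim a free zram device: 4 GiB uncompressed limit, lz4 compression
         DEV=$(zramctl --find --size 4G --algorithm lz4)

         # Format the device as swap and enable it with high priority
         mkswap "$DEV"
         swapon -p 100 "$DEV"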
  17. Does Unraid support zram and zswap? I have dealt a lot with SBCs and Armbian; even with minimal RAM, the zram and zswap modules can easily put swap in a compressed zram space with minimal lag from compressing/uncompressing, and even though it uses physical RAM, the compressed space can easily double or triple the effective RAM in use. I have an Orange Pi R1 with 512MB of RAM: 256MB is used for zram, 200% of that is used as compressed swap (adding 256MB over what is actually on the board), and I also kick my swappiness up to 100, and it works like a champ. But with the low prices of DDR4, I might as well go ahead and upgrade from 8GB to 16GB of RAM. Oh, and I also have the btrfs issue with creating a swapfile...fooey! Yeah, I know this one is about Ubuntu, but Slackware should be able to do this MUCH better than Debian/Ubuntu-based systems: https://askubuntu.com/questions/471912/zram-vs-zswap-vs-zcache-ultimate-guide-when-to-use-which-one#472227 A better one about Slackware: https://www.linuxquestions.org/questions/slackware-14/requests-for-current-14-2-15-0-a-4175620463/page141.html Linux should ALWAYS have swap capability...even if it is a "virtual" compressed-RAM swap space: https://haydenjames.io/linux-performance-almost-always-add-swap-space/
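      For zswap specifically (as opposed to zram), enabling it is a matter of kernel boot parameters rather than a daemon; a sketch of what that could look like in Unraid's syslinux config (the exact append line varies per install, so treat this as illustrative):

         # /boot/syslinux/syslinux.cfg -- add zswap parameters to the kernel line
         label Unraid OS
           kernel /bzimage
           append initrd=/bzroot zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20

         # plus the runtime swappiness tweak mentioned above
         sysctl vm.swappiness=100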
  18. It is still coming from the native nginx server built in, right, not a server inside the docker? Oh, and thanks for the reply.
  19. Howdy howdy howdy! New here...can't wait to check it all out!
  20. Hello! New to Unraid; I've been a Linux tinkerer for a while and came here from OMV, where I installed the ZFS package and used that. ZFS is a VERY solid array format, but I heard Oracle and Linux were in a kerfuffle over licensing and whatnot...bleh. Came here, was poking around, and have a question: do all the containers' webpages get served through the default nginx webserver that hosts the Unraid dashboard? If so, would the default nginx config benefit from a big performance tweak, such as changing worker_processes 1; in /etc/nginx/nginx.conf to worker_processes auto; with worker_cpu_affinity auto;, and upping worker_connections from 1536 to 65535? That should allow MUCH more throughput of requests from the docker containers to the nginx server. If not, I will change mine back, but I can DEFINITELY see much more pep in my Nextcloud docker after I made that change and restarted nginx by killing its process (found via ps -A). I even bit the bullet and bought a full PRO license. Peace
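      For clarity, the tweak described above as a config sketch (the surrounding events block is standard nginx boilerplate, not copied from an Unraid install):

         # /etc/nginx/nginx.conf -- worker tuning
         worker_processes auto;       # one worker per CPU core instead of a single worker
         worker_cpu_affinity auto;    # let nginx pin workers to cores

         events {
             worker_connections 65535;   # up from the stock 1536
         }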