fluisterben

Everything posted by fluisterben

  1. I just installed rsync using some nerd-pack. Way easier. An entire docker engine plus its dependencies, just for rsync, is way overkill. Plus you have to work around the fact that the actual server is not the actual server, but sits behind a bridge/vnode with translated ports etc.
  2. How are you starting/persistently running autossh in unraid? I have it running in debian as a service under systemd, but I have no idea how this is done for unraid's slackware-based OS:

        [Unit]
        Description=AutoSSH tunnel
        After=network.target network-online.target sshd.service

        [Service]
        Environment="AUTOSSH_GATETIME=0"
        RestartSec=40
        Restart=always
        ExecStart=/usr/bin/autossh -M 0 -o ExitOnForwardFailure=yes -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -p 65022 -N -R 65422:127.0.0.1:65422 root@remote-server-ip
        TimeoutStopSec=20

        [Install]
        WantedBy=multi-user.target
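     For the record, unraid has no systemd, so the rough equivalent would be launching it from the boot script. A minimal sketch, assuming autossh is installed (e.g. via a nerd-pack), reusing the exact tunnel options from the unit above; /boot/config/go is unraid's stock boot script:

        # appended to /boot/config/go -- runs once at boot
        # autossh handles reconnects itself; -f drops it to the background
        # (assumes root's ssh key to the remote server is already set up)
        export AUTOSSH_GATETIME=0
        /usr/bin/autossh -M 0 -f \
          -o ExitOnForwardFailure=yes \
          -o "ServerAliveInterval 60" \
          -o "ServerAliveCountMax 3" \
          -p 65022 -N -R 65422:127.0.0.1:65422 root@remote-server-ip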
  3. OK, never mind. It was a custom go config. Sorry about that.
  4. Well, could have fooled me, because it says:
  5. I don't get it: you ignore bash history, and then you want to have a nice history?
  6. This is still not working. I have just updated to 6.10.3 and the SSL cert is not loaded in nginx. Here's what I run after booting:

        #!/bin/bash
        cp -af /mnt/user/nxt/live/somename.org/fullchain.pem /boot/config/ssl/certs/somename_unraid_bundle.pem
        cat /mnt/user/nxt/live/somename.org/privkey.pem >> /boot/config/ssl/certs/somename_unraid_bundle.pem
        /etc/rc.d/rc.nginx reload
        /etc/rc.d/rc.php-fpm reload

     This runs without errors and nginx starts, but after this (and after rebooting the entire server as well) it still does not load the new cert. Honestly, this entire concept is broken in your config, as far as I can tell. People create their own certs now, and there's no way I can properly work that into unraid's OS setup. Please allow us to use a custom location for the cert(s), and do NOT recreate one every time we update the OS. Simply let the user skip all your SSL code and put a replacement for the /etc/nginx/conf.d/servers.conf SSL path in the UI, and you'd be done with this time-wasting support burden around SSL certs.
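     In case it helps anyone debug the same thing, one way to see which cert nginx is actually serving (host name here is a placeholder for your own):

        # print subject and validity dates of the cert currently served on 443
        echo | openssl s_client -connect unraid-host:443 -servername somename.org 2>/dev/null \
          | openssl x509 -noout -subject -dates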
  7. This tmpfs for /var/log:

        tmpfs /var/log tmpfs rw,size=128m,mode=0755 0 0

     is just insanely small for a server my size. It's constantly filling up to 100%. I have loads of free RAM available, so I would like to set this to 1024m or thereabouts. TIA!
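     Since /var/log is a tmpfs, it should at least be growable in place without a reboot. A sketch (1024m is just my target value; putting the same command in /boot/config/go would presumably make it survive reboots, though I can't say whether that's the sanctioned way):

        # grow the existing tmpfs mount for /var/log in place
        mount -o remount,size=1024m /var/log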
  8. What does SYSNICS="1" mean or do in the /boot/config/network.cfg file? My unraid server uses LACP bond mode 4 with two physical NICs; apparently it works, but I'd like to fine-tune where possible. Shouldn't it be SYSNICS="2" then? I use bonding on a remote debian server with this config:

        auto lo
        iface lo inet loopback

        auto bond0
        iface bond0 inet manual
            bond-slaves enp1s0 enp2s0
            bond-mode 802.3ad
            bond-miimon 100
            bond-updelay 200
            bond-downdelay 200
            bond-lacp-rate 0
            bond-xmit_hash_policy layer3+4

        auto br0
        iface br0 inet static
            address 192.168.1.9
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 127.0.0.1
            bridge_ports bond0
            bridge_waitport 0
            bridge_fd 0

     And I would like to have my unraid server use the same settings for its bond mode 4. Currently it's:

        # Generated settings:
        IFNAME[0]="br0"
        BONDNAME[0]="bond0"
        BONDING_MIIMON[0]="100"
        BRNAME[0]="br0"
        BRSTP[0]="no"
        BRFD[0]="0"
        BONDING_MODE[0]="4"
        BONDNICS[0]="eth0 eth1"
        BRNICS[0]="bond0"
        PROTOCOL[0]="ipv4"
        USE_DHCP[0]="no"
        IPADDR[0]="192.168.1.11"
        NETMASK[0]="255.255.255.0"
        GATEWAY[0]="192.168.1.1"
        DNS_SERVER1="192.168.1.9"
        DNS_SERVER2="1.0.0.2"
        USE_DHCP6[0]="no"
        SYSNICS="1"

     The bonding seems to work:

        # ifconfig
        bond0: flags=5443<UP,BROADCAST,RUNNING,PROMISC,MASTER,MULTICAST>  mtu 1500
                ether d0:50:99:d1:8f:47  txqueuelen 1000  (Ethernet)
                RX packets 370606  bytes 60409014 (57.6 MiB)
                RX errors 0  dropped 191  overruns 0  frame 0
                TX packets 2881873  bytes 4333925543 (4.0 GiB)
                TX errors 0  dropped 170  overruns 0  carrier 0  collisions 0

        br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.1.11  netmask 255.255.255.0  broadcast 0.0.0.0
                ether d0:50:99:d1:8f:47  txqueuelen 1000  (Ethernet)
                RX packets 338185  bytes 15248343 (14.5 MiB)
                RX errors 0  dropped 53  overruns 0  frame 0
                TX packets 86837  bytes 4182443853 (3.8 GiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

        docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
                ether 02:42:3d:e5:d5:ce  txqueuelen 0  (Ethernet)
                RX packets 650  bytes 75489 (73.7 KiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 497  bytes 124616 (121.6 KiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

        eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
                ether d0:50:99:d1:8f:47  txqueuelen 1000  (Ethernet)
                RX packets 351441  bytes 57287347 (54.6 MiB)
                RX errors 0  dropped 4  overruns 0  frame 0
                TX packets 53320  bytes 69543800 (66.3 MiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
                device memory 0x91300000-9137ffff

        eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
                ether d0:50:99:d1:8f:47  txqueuelen 1000  (Ethernet)
                RX packets 14203  bytes 2002583 (1.9 MiB)
                RX errors 0  dropped 3  overruns 0  frame 0
                TX packets 2824329  bytes 4262089349 (3.9 GiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
                device memory 0x91400000-9147ffff

        lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
                inet 127.0.0.1  netmask 255.0.0.0
                loop  txqueuelen 1000  (Local Loopback)
                RX packets 5309  bytes 378075 (369.2 KiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 5309  bytes 378075 (369.2 KiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

        vethadea463: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                ether 76:af:3b:7e:14:8d  txqueuelen 0  (Ethernet)
                RX packets 650  bytes 84589 (82.6 KiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 497  bytes 124616 (121.6 KiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

        vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                ether fe:54:00:57:cb:d9  txqueuelen 1000  (Ethernet)
                RX packets 6067  bytes 738073 (720.7 KiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 12127  bytes 36184105 (34.5 MiB)
                TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     So that looks OK now, but is there a way to set

        bond-lacp-rate 0
        bond-xmit_hash_policy layer3+4

     in the unraid network config?
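     I don't see a field for either of those in network.cfg, but the kernel's bonding driver exposes both through sysfs, so a sketch of what could go in /boot/config/go (assuming the bond really is bond0; the kernel only accepts a new lacp_rate while the bond is down, so that one may need the bond torn down first):

        # transmit hash policy can be changed on a live bond
        echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy

        # lacp_rate ("slow" matches bond-lacp-rate 0) only while the bond is down
        ip link set bond0 down
        echo slow > /sys/class/net/bond0/bonding/lacp_rate
        ip link set bond0 up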
  9. Apparently this messes up 802.3ad / LACP bond mode 4, when you have that in use. I have 2 NICs on my unraid server that I have bonded for link aggregation.
  10. This is still an issue. My disks keep spinning up almost immediately after being spun down, and barely any of them even spin down when asked. This did not use to be the case, with the exact same hardware and software config. Something is waking them up, and it costs us a ridiculous amount of power. In fact, the reason I noticed this issue is that we wanted to know what had changed in our power consumption. It turned out to be the unraid server.
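     In case anyone wants to chase the same thing, a rough sketch for catching whatever touches the array disks (assuming lsof is present on the console; iostat comes from sysstat and may need installing):

        # list anything holding files open under the array disk mounts
        lsof 2>/dev/null | grep '/mnt/disk'

        # watch per-device I/O in 5-second intervals to see which disk wakes
        iostat -d 5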
  11. I agree, unbelievably complex. This procedure should at the very least be scripted/automatable, I think. It is in most other NAS systems, like Drobo, Synology, OMV, etc.
  12. ~# powertop --auto-tune

        modprobe cpufreq_stats failed
        Loaded 0 prior measurements
        RAPL device for cpu 0
        RAPL device for cpu 0
        Devfreq not enabled
        glob returned GLOB_ABORTED
        the port is sda
        the port is sdb
        the port is sdc
        the port is sdd
        the port is sde
        the port is sdf
        the port is sdg
        the port is sdh
        the port is sdi
        the port is sdj
        the port is sdk
        the port is sdl
        the port is sdm
        the port is sdn
        the port is sdo
        Leaving PowerTOP

     OK, I'm new to this. What do those "failed" and "aborted" mentions mean? I suppose I need to take my old monitor to the crawlspace and change BIOS settings for this machine, no?
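     As far as I can tell, the modprobe line just means the cpufreq_stats module isn't shipped with (or is built into) this kernel, and GLOB_ABORTED is powertop failing to glob a /sys path it expected; neither looks like a BIOS problem. A quick check, assuming the usual sysfs layout:

        # does the kernel expose cpufreq at all?
        ls /sys/devices/system/cpu/cpu0/cpufreq/ 2>/dev/null || echo "no cpufreq sysfs"

        # reproduce powertop's module complaint by hand
        modprobe cpufreq_stats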
  13. What exactly do you mean by this? You have posted 5 command lines, which is rather confusing. The sdparm command does not state anything about drives being spun down or not; at least I don't see it.
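     For comparison, the single command I would expect to answer the spin-state question is hdparm rather than sdparm; -C queries the drive's power mode without waking it (SAS drives may not respond to it):

        # print power state per drive: "active/idle" or "standby" (spun down)
        for d in /dev/sd[a-o]; do
          printf '%s: ' "$d"
          hdparm -C "$d" | awk '/drive state/ {print $4}'
        done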
  14. The winbindd error seems to be caused by Samba/smb, since I have at most 4 machines connecting using winbind, not over 200. https://bugzilla.samba.org/show_bug.cgi?id=3204
  15. Apr 2 13:19:03 unraid9 atd[6027]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6055]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6053]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6057]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6056]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6059]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6061]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6060]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6065]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 winbindd[9470]: [2021/04/02 13:19:03.085234, 0] ../../source3/winbindd/winbindd.c:1255(winbindd_listen_fde_handler)
      Apr 2 13:19:03 unraid9 winbindd[9470]: winbindd: Exceeding 200 client connections, no idle connection found
      Apr 2 13:19:03 unraid9 atd[6066]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6282]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6285]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 winbindd[9470]: [2021/04/02 13:19:03.085493, 0] ../../source3/winbindd/winbindd.c:1255(winbindd_listen_fde_handler)
      Apr 2 13:19:03 unraid9 winbindd[9470]: winbindd: Exceeding 200 client connections, no idle connection found
      Apr 2 13:19:03 unraid9 atd[6063]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6067]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6070]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6280]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6058]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6289]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6072]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6071]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6074]: Userid 0 not found - aborting job 3 (a00003019b50c7)
      Apr 2 13:19:03 unraid9 atd[6073]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6075]: Userid 0 not found - aborting job 4 (a00004019b50bd)
      Apr 2 13:19:03 unraid9 atd[6076]: Userid 0 not found - aborting job 3 (a00003019b50c7)

     OK, my /var/log/syslog gets filled up within minutes with these errors, and I have no idea where they come from. I've already researched at(d) as the source, and apparently there are many files spooled from it, but the winbindd error is a mystery to me and seems directly related. Has anyone seen this before? It started after updating unraid to the latest stable version.
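     For anyone digging into the same flood: atd keeps its jobs as plain files in its spool, so the at side can at least be inspected directly. A sketch, assuming the usual spool location (it may differ on unraid's slackware base):

        # list pending at jobs -- the same queue atd is choking on
        at -l

        # inspect the raw job files atd tries to run as the missing userid
        ls -l /var/spool/atjobs/ && head -n 20 /var/spool/atjobs/*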
  16. How does one 're-enable the disk' other than what the TS wrote? I also followed the guide to re-enable the disk: stopping the array, removing the disk from the array, starting the array, stopping the array, re-adding the disk to the array, and letting it do a parity-sync rebuild. You seem to know of some miracle other kind of 're-enable' option that is not documented anywhere. For most people that will be too late by then, since they will already have rebuilt.
  17. They are just a 'Share', so in my case /mnt/user/nxt, which does not show any settings for permissions relating to access from a VM. It shows Export set to Yes and 'Secure' for SMB, but this isn't SMB, so: should I change permissions from the shell then? I have really no idea about the inner workings of that passthrough share in unraid; it's not documented anywhere.
  18. OK, pulling this thread back up, because I have issues again. I run a debian VM with (among others) nextcloud and nginx on it. It has this part in its xml:

        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/mnt/user/nxt'/>
          <target dir='nxt'/>
          <alias name='fs0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </filesystem>

     Now, /nxt is working from within the VM, but the permissions under it seem to be problematic. Even though I can set them just fine from within the VM under /nxt, nextcloud still complains about "Home storage for user x not being writable", and sharing within nextcloud has issues. I've tried to find out why, but the logs are unclear as to the reason. Any ideas?
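     For context, the counterpart inside the guest: a libvirt <filesystem> export like the one above is normally mounted over 9p/virtio, e.g. with this fstab line in the VM (mount point and options here are just the stock ones). With accessmode='passthrough', the guest's uid/gid land on the host files as-is, which, as far as I understand it, is where this kind of permission trouble usually starts:

        # /etc/fstab inside the VM: mount the exported target 'nxt' at /nxt
        nxt  /nxt  9p  trans=virtio,version=9p2000.L,rw,_netdev  0  0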
  19. There are no names to resolve when you proxy outside of a docker container using nginx. Not much of what I read here makes any sense. The name resolving is done outside of the container here, for nginx, with dyndns. NGINX listens for that name and serves its vhost, then proxies to/from the docker container on a specified port, which isn't 80 or 443, because those are already used in the network. Port numbers are not 'resolved' by DNS.
  20. You wrote: "You should use 80 instead of the port you mapped to the container as it uses dockers internal network to resolve names.", which is just incorrect. First, docker does not by definition use an internal network; you set it up to do so. Second, names are resolved using DNS, which can point anywhere, regardless of where in a network you are.
  21. Name resolving has nothing to do with either the network or the ports of a docker container.
  22. You need to create a docker network like this:

        docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 mymcvlannet

     and then something like this in the run line:

        docker run --net mymcvlannet --ip 192.168.1.111

     That way your container should expose its served ports directly on that .111 LAN IP, so you don't have to run strange proxy setups. Only problem is, I have no idea where to put this in the unRAID GUI.
  23. That will not work, because the docker container is still part of the 0.0.0.0 network of unraid. There's no new IP for that docker instance. I'd prefer it if it were that way, but none of the dockers for unRAID do this.
  24. So, basically you're saying: remove the drive, put a replacement in, let it do a parity rebuild, done. If that is the procedure, why isn't unraid just telling me that while it happens? The way things are portrayed, I'm not sure whether the data in the array is intact or complete when I just kick that drive out. Here's my advice to the unRAID devs: I get warnings that a drive is going bad; more failures, more SMART errors, slowly deteriorating. I want to replace it. The first thing a user wants is for unraid to READ from that dying disk whatever is still intact (and readable), move it off, and then discard the blocks. Or at the very least to be really assured that intact copies of whatever may be going bitrot on that drive exist somewhere outside of it, so the user is not losing data.
  25. OK, SSDs added to the cache pool, and I ran:

        ~# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache -v
        Dumping filters: flags 0x7, state 0x0, force is off
          DATA (flags 0x100): converting, target=64, soft is off
          METADATA (flags 0x100): converting, target=64, soft is off
          SYSTEM (flags 0x100): converting, target=64, soft is off

     I'll have to wait and see if it works, but it looks good thus far.

        ~# btrfs fi show
        Label: none  uuid: f18f37c9-5244-4567-b88f-0bdcaa32e693
                Total devices 7 FS bytes used 937.73GiB
                devid    2 size 894.25GiB used 893.54GiB path /dev/nvme0n1p1
                devid    3 size 894.25GiB used 894.25GiB path /dev/sdp1
                devid    4 size 894.25GiB used 894.25GiB path /dev/sdn1
                devid    6 size 953.87GiB used 781.50MiB path /dev/sdj1
                devid    7 size 953.87GiB used 781.50MiB path /dev/sdl1
                *** Some devices missing

        Label: none  uuid: dfa50f2a-9787-4d7a-88a5-7760f6b2e8a6
                Total devices 1 FS bytes used 1.62GiB
                devid    1 size 20.00GiB used 5.02GiB path /dev/loop2

        Label: none  uuid: df5fea13-a625-4b37-b7c2-7fcc3328bc65
                Total devices 1 FS bytes used 604.00KiB
                devid    1 size 1.00GiB used 398.38MiB path /dev/loop3

     I still need to do a new config to get rid of the ghost devices 1 and 5, I guess, but there's no hurry for that, is there?
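     Note for later: btrfs can also drop missing devices directly, which might save the new-config step once the balance has finished (a sketch; "missing" is a literal keyword here, and it only works while the remaining redundancy can cover the removal):

        # drop all missing devices from the cache pool's filesystem
        btrfs device remove missing /mnt/cache

        # confirm the ghost devids 1 and 5 are gone
        btrfs fi show /mnt/cache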