ken-ji

Everything posted by ken-ji

  1. I recommend sticking all the containers that are grouped together into their own VLAN, and having your router restrict access to that VLAN. This is probably the easiest way to achieve what you want, since macvlan networking is like running a virtual switch on the br0.10 interface, so everybody on VLAN10 would be able to see the traffic.
  2. It is... this is a snippet of my nginx reverse proxy running on VLAN3 with its own IP. Unraid cannot be accessed from VLAN3, only nginx and any related containers on that same interface. It's up to my router to restrict access to/from other VLANs
  3. Hmm... containers on bridge networks are by default accessible via all the assigned IPs of all the interfaces of the host. This is the default Docker behavior, and Unraid does not support the advanced options here. However, think about it a bit: if you assign an IP on VLAN10 to Unraid, the containers can now be accessed on this IP in VLAN10, but Unraid's own services become accessible there as well. If there is no IP, as you have it now, all containers on bridge networks will only be accessible via the VLAN1 IP address. So the best answer for a container to be accessible only from VLAN10 is to create a custom Docker network and assign the desired IP (or let Docker assign one dynamically) to the container - see the sketch below. I think you already have this setup. Is there any reason you must run with a bridge network rather than a macvlan?
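     As a rough sketch (the interface name, subnet, addresses, and container names here are examples, not your actual config), a custom macvlan network on the VLAN10 interface could be created like this:
     # create a macvlan network on the VLAN10 sub-interface
     docker network create -d macvlan \
       --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
       -o parent=br0.10 vlan10
     # attach a container with a fixed IP on that VLAN
     docker run -d --network=vlan10 --ip=192.168.10.50 --name=myapp myimage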
  4. Odd. So if you do not have an IP address assigned to eth0.10, Unraid can still be reached via that VLAN? How exactly? Are you perhaps just confusing it with access routed through your router? client (VLAN10) -> router (VLAN10) -> router (VLAN1) -> Unraid (VLAN1/br0)
  5. Looks about right (for 6.8.3), though it is unnecessarily convoluted, and it's wrong for 6.9 as discussed above. So just put your authorized_keys file in /root/.ssh (which should now be symlinked to /boot/config/ssh/root), make a copy of /etc/ssh/sshd_config in /boot/config/ssh and edit that, then restart sshd with /etc/rc.d/rc.sshd restart, which should make the new changes active. A condensed version follows below.
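     Condensed, assuming the 6.9 layout described above (the path to your public key is just an example):
     # keys persist via the symlink /root/.ssh -> /boot/config/ssh/root
     cp /boot/my_key.pub /root/.ssh/authorized_keys
     # persist sshd config changes across reboots
     cp /etc/ssh/sshd_config /boot/config/ssh/
     nano /boot/config/ssh/sshd_config      # edit as needed
     /etc/rc.d/rc.sshd restart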
  6. Also, since you mentioned failover, the 10Gbit NIC and the 1Gbit onboard are probably bonded (as part of the default config), and the default bond uses the lowest common speed of all active ports. You should break the bond up into separate ethX devices.
  7. To disable password logins you also need to set
     # To disable tunneled clear text passwords, change to no here!
     PasswordAuthentication no
     or, since Limetech insists that only the root user be used, restrict root to key-based login with
     PermitRootLogin prohibit-password
     #PermitRootLogin yes
  8. Shut down Unraid by issuing the poweroff command on the terminal, or press the power button for 1-2 seconds. Take out the flash drive and delete config/network.cfg using another machine (see below). This will reset your network settings completely back to the initial config (DHCP, bonded).
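     A minimal sketch of those two steps (the flash drive mount point on the other machine is just a placeholder):
     poweroff                               # on the Unraid terminal
     # then, with the flash drive plugged into another machine:
     rm /path/to/flash/config/network.cfg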
  9. Hmmm. You'll need to validate that the LSI card you have is actually working, since you just acquired it to replace the Highpoint 2722. Maybe test the LSI on another machine, if you have any, with the shelf connected.
  10. During boot up, the LSI card should prompt you to press Ctrl-A to enter the card setup utility; does this not work on your QNAP?
  11. You're using the 8e (external). How is it connected to your HDDs? Are you using a shelf/JBOD enclosure? SFF-8088 -> SFF-8088 is the usual here. Or are you feeding the cables back in with SFF-8088 -> SATA? Make sure you are using forward breakout cables, as these are one-directional and you can't use the other kind (reverse breakout), which is meant for controller/motherboard SATA -> SAS enclosure (SFF-8088).
  12. Are you able to SSH into Unraid itself? If so, you might be able to run the diagnostics command and scp the output zip file so people can tell you what's up (example below). Also bear in mind that if you have exposed the web UI (port 80/443) or the SSH port 22 to the internet and don't have a decently strong password for the root account, there's a good chance your server has been compromised, which may be why you can't connect to the web interface. (Even with a great strong password, just exposing it to the internet is no guarantee that it hasn't been compromised.) Finally, you can opt to try restarting the server with the reboot command.
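     For example (the destination host is an assumption, and the diagnostics zip typically lands under /boot/logs):
     diagnostics
     scp /boot/logs/*-diagnostics-*.zip someuser@another-machine:~/
     reboot      # as a last resort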
  13. You can check whether the LSI card sees the HDDs during POST by accessing the card's utility (Ctrl-A during bootup). Sometimes the settings on the card might have some of the ports disabled.
  14. There's a workaround you can use, if it's acceptable to you: Samba has username mapping. Merge this into /boot/config/smb-extra.conf (\\tower\flash\config\smb-extra.conf) - create the file if it doesn't exist, or add the username map line under the [global] section:
      [global]
      username map = /boot/config/smb-users.map
      Then create /boot/config/smb-users.map (\\tower\flash\config\smb-users.map) containing:
      user_name = user.name
      Here user_name is the name you have created in Unraid and user.name is the name you have on the Windows machines. The two clients with those usernames will then be treated as the same user (same password and permissions - this is the part that has to be acceptable to you). This should take effect immediately, or by restarting the Samba service (/etc/rc.d/rc.samba restart), or by stopping and starting the array.
  15. Well, the changes in 6.9.0 for SSH mainly allow the ssh keys to persist across reboots without any user intervention. This eliminates the need for the hacks we've all created and pooled together to copy keys from storage during bootup. However, Unraid still does not have any GUI tools for generating the keys, but most users using ssh already know how to do this on the CLI. A quick way (if you are using Windows) is to download PuTTY from https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html (or even just puttygen.exe) and use that to generate the key you need (exporting it in OpenSSH format). Also, SFTP is built in to most NAS/servers that have SSH enabled. And if you want to get started but are bothered by the complexity of the SSH keys thing, you can encode your remote password into the rclone config (though most security experts would frown on this).
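     On the CLI side (Linux/macOS, or Windows with OpenSSH installed), generating a key pair is a one-liner; the key type and comment here are just examples:
     ssh-keygen -t ed25519 -C "rclone-to-unraid"
     # then append the resulting .pub file to /root/.ssh/authorized_keys on the remote end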
  16. Only the files that should be in /root/.ssh should be in /boot/config/ssh/root. All the other files in /boot/config/ssh are copied into /etc/ssh (non-recursively) whenever the sshd service is started/restarted, just before sshd starts. SSH key-only login was never enabled by default; to enable it, you copy /etc/ssh/sshd_config to /boot/config/ssh and edit that one.
  17. I am not sure about other writes, but while it is not connected, it will definitely keep writing the "connect to an account" message to the docker log for the container.
  18. This is just sample code, but it can go into your script at line 2 (the first possible code line). What the 1st line does is check if there exists a file /var/lock/script.lock (note the typo), and if it does, it prints "Still running" for the user's reference and exits the script (lines 2 and 3). If the script keeps running beyond the check (i.e. the file doesn't exist), it then creates the file, and the script can do whatever you want. Then at the end of the script, the command I showed forcibly deletes any existing /var/lock/script.lock. All in all, this means that when the script starts up, it checks if the lock file exists and aborts if it does - meaning a previously started copy is still running (or crashed/aborted) - and the script will not run again until the lock file has been deleted. When the script does run, the last thing it does is delete the lock file, thus allowing the script to run again. A sketch of the whole pattern follows.
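     A minimal sketch of that pattern, assuming the same lock file path (adjust the lock name and the body to your own script):
     if [ -f /var/lock/script.lock ]; then    # does the lock file exist?
         echo "Still running"                 # tell the user why we bail out
         exit 1                               # abort this run
     fi
     touch /var/lock/script.lock              # create the lock

     # ... the actual work of your script goes here ...

     rm -f /var/lock/script.lock              # last step: remove the lock so the next run can proceed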
  19. I'm simply running multiple root shells under tmux, with each shell in its own directory. I think that's a fair use of Unraid, without getting into the whole other-users can of worms. So you can imagine my surprise when a simple login shell refused to open in the specific directory I had it open in.
  20. @Gico Do you know which client on your network is 192.168.168.10? It's connecting to your server repeatedly via /login, and nginx is complaining about it.
  21. No, these are host keys - what the server uses to identify itself to the user - and all the files in /boot/config/ssh will be installed into /etc/ssh upon startup of the ssh server. If the ssh server starts up and cannot find these host keys, it will generate new ones and scare anybody trying to ssh in with a warning message about possible man-in-the-middle attacks due to the host key mismatch. Needless to say, the new ones will be saved to /boot/config/ssh as well. For those who use the SSH plugin or know what they are doing, the configuration of the ssh service can be changed and persisted by copying the modified /etc/ssh/sshd_config file here.
  22. Don't touch those, as they are the SSH host keys (deleting them will regenerate them on sshd restart). If they are regenerated, you'll get warnings about man-in-the-middle attacks (ssh will consider your Unraid host as never seen before).
  23. Well, as it stands, you should enable root, as that's the supported Unraid way.
  24. Well, there's currently some code in /etc/profile that actually prevents non-root users from getting in via ssh (rather, you can't spawn a login shell without encountering issues), so I think a review of the multi-user stance of Unraid vs your needs is in order. (I'm not advocating either way, just that there shouldn't be changes that break standard functionality.)
  25. Upgraded from 6.9.0, and this seems to have resolved the issues with 6.9.0. There are some things that could still use fixing, like /etc/profile forcing bash to start in /root, which breaks some of my tmux scripts. Also, I personally don't think it's necessary to force 777 access to /dev/dri, but I understand why you would want to take the easy way out and make things easier for users. It should really be up to the docker image creators/maintainers to account for this, though. It simply is not the Unix mentality to willy-nilly assign full access to entire directories (I'm looking at the default/recommended Unraid file permissions), nor is it good security practice.