ken-ji

Members
  • Posts: 1245
  • Days Won: 4

Everything posted by ken-ji

  1. It'll be gone upon reboot; it has no impact whatsoever as far as I can tell. And if you want to remove it, the correct command would be rmdir /usr/local/emhttp/-
  2. Chiming in that I'm running an i7-7700, also with the UHD 630 iGPU, using it with Emby for iGPU transcoding as well, and my server is rock stable (Unraid bugs notwithstanding). I've only rebooted it to enable VFIO binding and to recover from a bad package install (newer Slackware packages don't work, as they updated glibc but Limetech didn't).
  3. @dlandon Minor quibble: you have this line in the plg file: mkdir - /tmp/&name;/scripts. It's creating a '-' directory in /usr/local/emhttp.
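     Presumably the stray '-' was meant to be mkdir's -p flag; a guess at the intended line (the actual plg may differ):

         mkdir -p /tmp/&name;/scripts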
  4. @limetech Seems like a bug in shfs itself:

         root@MediaStore:/mnt/user/Downloads# echo x > a
         root@MediaStore:/mnt/user/Downloads# chmod 600 a
         root@MediaStore:/mnt/user/Downloads# ls -l a
         -rw------- 1 root root 2 Mar 23 21:44 a
         root@MediaStore:/mnt/user/Downloads# ls -l /mnt/cache/Downloads/a
         -rw------- 1 root root 2 Mar 23 21:44 /mnt/cache/Downloads/a
         root@MediaStore:/mnt/user/Downloads# cat a
         x
         root@MediaStore:/mnt/user/Downloads# su nobody -s /bin/sh
         nobody@MediaStore:/mnt/user/Downloads$ cat a
         x
         nobody@MediaStore:/mnt/user/Downloads$ cat /mnt/cache/Downloads/a
         cat: /mnt/cache/Downloads/a: Permission denied
         nobody@MediaStore:/mnt/user/Downloads$ echo y > a
         nobody@MediaStore:/mnt/user/Downloads$ cat a
         y

     I hope this is not directly caused by Limetech's stance that Unraid is an appliance and should only have the root user, as this can break NFS shares, ransomware protection, and any use case that expects file permissions to protect data.
  5. I recommend sticking all the containers that are grouped together into their own VLAN, and having your router restrict access to that VLAN. This is probably the easiest way to achieve what you want, since macvlan networking is like running a virtual switch on the br0.10 interface, so everybody on VLAN 10 would be able to see the traffic.
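     As a sketch of that setup, assuming VLAN 10 rides on br0.10 and uses a made-up 192.168.10.0/24 subnet:

         # create a macvlan Docker network on the VLAN 10 interface
         docker network create -d macvlan \
             --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
             -o parent=br0.10 vlan10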
  6. It is... this is a snippet of my nginx reverse proxy running on VLAN3 with its own IP. Unraid cannot be accessed from VLAN3; only nginx and any related containers on that same interface can be. It's up to my router to restrict access to/from the other VLANs.
  7. Hmm... containers on bridge networks are by default accessible via all the assigned IPs of all the host's interfaces. This is the default Docker behavior, and Unraid doesn't support the advanced mode here. However, think about it a bit: if you assign an IP on VLAN 10 to Unraid, the containers can now be accessed on this IP in VLAN 10, but the Unraid services become accessible as well. If there is no IP, as you have it now, all containers on bridge networks will only be accessible via the VLAN 1 IP address. So the best answer, for a container to be accessible only from VLAN 10, is to create a custom Docker network and assign the desired IP (or get one dynamically via Docker) for the container. I think you already have this set up. Is there any reason you must run with a bridge network rather than a macvlan?
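     For reference, attaching a container to such a custom network with a fixed address looks like this (network name, address, and image are placeholders):

         docker run -d --network vlan10 --ip 192.168.10.50 --name web nginx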
  8. Odd. So if you do not have an IP address assigned to eth0.10, Unraid can still be reached via that VLAN? How exactly? Are you sure you're not just accessing it through your router?

         client (VLAN 10) -> router (VLAN 10) -> router (VLAN 1) -> Unraid (VLAN 1/br0)
  9. Looks about right (for 6.8.3), though it is unnecessarily convoluted, and it's wrong for 6.9 as discussed above. So: just put your authorized_keys file in /root/.ssh (which should now be symlinked to /boot/config/ssh/root), make a copy of /etc/ssh/sshd_config in /boot/config/ssh and edit that, then restart sshd with /etc/rc.d/rc.sshd restart, which should make the new changes active.
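     Condensed into commands (a sketch for 6.9, following the paths above; the key file name is just an example):

         cp mykey.pub /root/.ssh/authorized_keys    # lands in /boot/config/ssh/root via the symlink
         cp /etc/ssh/sshd_config /boot/config/ssh/
         vi /boot/config/ssh/sshd_config            # make your edits here
         /etc/rc.d/rc.sshd restart                  # activate the changes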
  10. Also, since you mentioned failover: the 10Gbit NIC and the 1Gbit onboard are probably bonded (as part of the default config), and the default bond uses the least common speed of all active ports. You should break them out into their own ethX devices.
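      You can confirm the bond mode and per-port link speeds with standard Linux tooling:

          cat /proc/net/bonding/bond0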
  11. You also need to set

          # To disable tunneled clear text passwords, change to no here!
          PasswordAuthentication no

      to disable password login, or, since Limetech insists that only the root user be used:

          PermitRootLogin prohibit-password
          #PermitRootLogin yes
  12. Shut down Unraid by issuing the poweroff command on the terminal, or by pressing the power button for 1-2 s. Then take the flash drive and delete config/network.cfg using another machine. This will reset your network settings completely back to the initial config (DHCP, bonded).
  13. Hmmm. You'll need to validate that the LSI card you have is actually working, since you just acquired it to replace the HighPoint 2722. Maybe test the LSI in another machine, if you have one, with the shelf connected.
  14. During boot-up, the LSI card should prompt you to press Ctrl-A to enter the card's setup utility. Does this not work on your QNAP?
  15. You're using the 8e (external). How is it connected to your HDDs? Are you using a shelf/JBOD enclosure? SFF-8088 -> SFF-8088 is the usual here. Or are you feeding the cables back in with SFF-8088 -> SATA? If so, make sure you are using forward breakout cables: these are one-directional, and you can't use the other kind (reverse breakout), which is meant for controller/motherboard SATA -> SAS enclosure (SFF-8088).
  16. Are you able to SSH into Unraid itself? If so, you might be able to run the diagnostics command and scp the output zip file out so people can tell you what's up. Also bear in mind that if you have exposed the web UI (port 80/443) or the SSH port 22 to the internet and don't have a decently strong password for the root account, there's a good chance your server has been compromised, which may be why you can't connect to the web interface. (And even with a great strong password, exposing it to the internet is no guarantee that it hasn't been compromised.) Finally, you can opt to try restarting the server with the reboot command.
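      A sketch of those two commands (the diagnostics zip should land in /boot/logs on the flash; the destination is a placeholder):

          diagnostics
          scp /boot/logs/*-diagnostics-*.zip you@your-pc:/tmp/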
  17. You can check whether the LSI card actually sees the HDDs during POST by entering the card's utility (Ctrl-A during boot-up). Sometimes the settings on the card have some of the ports disabled.
  18. There's a workaround you can use, if it's acceptable to you: samba has username mapping. Merge this into /boot/config/smb-extra.conf (\\tower\flash\config\smb-extra.conf); create the file if it doesn't exist, or add the username map line under the existing [global] section:

          [global]
          username map = /boot/config/smb-users.map

      Then create /boot/config/smb-users.map (\\tower\flash\config\smb-users.map):

          user_name = user.name

      Here user_name is the name you have created in Unraid and user.name is the name you have on the Windows machines. Two clients with these two usernames will then be treated as the same user (same password and permissions) <- this is the "if acceptable to you" part. This should take effect immediately, or after restarting the samba service (/etc/rc.d/rc.samba restart), or by stopping and starting the array.
  19. Well, the changes in 6.9.0 for SSH mainly allow the SSH keys to persist across reboots without any user intervention. This eliminates the need for the hacks we've all created and pooled together to copy keys from storage during boot-up. However, Unraid still does not have any GUI tools for generating the keys, though most users using SSH already know how to do this on the CLI. A quick way (if you are using Windows) is to download PuTTY from https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html (or even just puttygen.exe) and use that to generate the key you need (exporting it in OpenSSH format). Also, SFTP is built into most NAS/servers that have SSH enabled. And if you want to get started but are bothered by the complexity of the SSH keys thing, you can encode your remote password into the rclone config (though most security experts would frown on this).
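      For reference, the CLI equivalent on Linux/macOS (the file path is just the default):

          ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519    # writes a key pair in OpenSSH format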
  20. Only the files that should be in /root/.ssh should be in /boot/config/ssh/root; all the other files in /boot/config/ssh are copied (non-recursively) into /etc/ssh whenever the sshd service is started/restarted. SSH key-only login was never enabled by default. To enable it, copy /etc/ssh/sshd_config to /boot/config/ssh and edit that copy.
  21. I am not sure about writes, but while it is not connected, it will definitely be writing the "connect to an account" message to the container's Docker log.
  22. This is just sample code, but it can go into your script at line 2 (the first possible code line). What the 1st line does is check whether the file /var/lock/script.lock exists (note the typo), and if it does, it prints "Still running" for the user's reference and exits the script (lines 2 and 3). If the script keeps running beyond the check (i.e. the file doesn't exist), it then creates the file, and the script can do whatever you want. Then at the end of the script, the command I showed will forcibly delete any existing /var/lock/script.lock. All in all, this means that when the script starts up, it checks whether the lock file exists and aborts if it does, since that means a previous run is still in progress (or crashed/aborted), and the script will not run again until the lock file has been deleted. When the script does run, the last thing it does is delete the lock file, thus allowing the script to run again.
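      The sample code itself isn't shown in this listing, but from the description it would look roughly like this (lock file name per the post; the work section is a placeholder):

          #!/bin/bash
          # abort if a previous run is still holding the lock
          if [ -f /var/lock/script.lock ]; then
              echo "Still running"
              exit
          fi
          touch /var/lock/script.lock    # take the lock

          # ... the actual work of the script goes here ...

          rm -f /var/lock/script.lock    # release the lock so the next run can proceed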
  23. I'm simply running multiple root shells under tmux, with each shell in its own directory. I think that's fair use of Unraid, without going into the whole other-users can of worms. So you can imagine my surprise when a simple login shell refused to open in the specific directory I had it open in.
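      For the curious, that setup is just something like (directories are examples):

          tmux new-session -c /mnt/user/Downloads    # first shell, opened in its own directory
          tmux new-window  -c /mnt/user/appdata      # another shell in another directory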