docgyver

Community Developer
  • Posts: 44
  • Joined
  • Last visited

1 Follower

Converted

  • Gender: Male
  • Location: Atlanta, GA


docgyver's Achievements

Rookie (2/14)

Reputation: 8

  1. @Pillendreher, I installed 6.10.0-RC2 and have had no issues logging in as either root or my test user. Both public key (root and testuser) and password (testuser) authentication work. I saw in the release thread that some older ssh clients may have issues connecting using ssh-rsa key types. I didn't drill down on the comment, so I may not have that 100% correct; please do your own due diligence if it looks like you have public key issues. If you suspect that is what you're hitting, see the connection test sketched below.

     I also read that new systems have SSH disabled by default. Your system logs suggest you have enabled it, or that this isn't a new install. I mention it only in case ssh somehow got disabled in the Settings:Management Access tab. I hope to have a test bed in place some time today so I can try a fresh install of 6.10 and see if that behaves differently than an upgraded system.

     In addition to checking for an AllowUsers config entry, please let me know whether you installed 6.10.0-RC2 on a fresh system or upgraded. Upgrades can be a little problematic, since things like the AllowUsers line might not exist in an sshd_config which has been persisted from an older version of unRaid. Thanks in advance for your reply, and also for your patience and help as I work through how to robustly handle sshd_configs coming from older initial builds.

     P.S. I just re-read your OP and saw your speculation that you can't have both root and non-root. That is not the case. The plugin supports root plus any number of (selected/enabled) non-root users.
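     In case it helps, here is a minimal client-side test, assuming the release-thread comment was about the OpenSSH 8.8+ deprecation of ssh-rsa signatures (user and host names are placeholders, not from this thread):

         # If the plain attempt fails with a host key or pubkey algorithm error
         # but the second one works, the client is refusing ssh-rsa.
         ssh -v testuser@tower.local true

         # Re-allow ssh-rsa for this one connection to confirm the diagnosis:
         ssh -o HostKeyAlgorithms=+ssh-rsa \
             -o PubkeyAcceptedAlgorithms=+ssh-rsa \
             testuser@tower.local true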
  2. Thanks for your thorough information. Most of the questions that immediately came to mind were answered from the logs and picture. My biggest concern was that you might have an old copy of the plugin, but that isn't the case. I haven't upgraded to 6.10 yet but have been working on a testbed so that I can start getting ahead of the testing without having to update my primary system. Having Plex down is not "wife friendly". That said, I may take the risk and update to see what happens.

     In the meantime, please look at /etc/ssh/sshd_config for a line "AllowUsers..." and see if it has anything besides root in it. Given your example I would expect it to look like: AllowUsers root unraidssh. It should also be persisted in /boot/config/ssh/sshd_config. (A quick way to check both copies is sketched below.) If you see either the line missing or the unraidssh user missing from it, please try to "refresh" the plugin. I think just using the restart ssh button in the plugin should work, but to be sure, unselect and reselect the user you want to enable, then click apply. I'll follow up shortly.

     To the point made by @bonienl, based on your post you know that normally only root can login. This plugin is, as you infer, meant to allow you to enable additional users. I'll get it working with 6.10 and/or help you work out why it isn't working for you.
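     A quick way to check both the live and the persisted copy at once (a sketch; "unraidssh" is just the example user from above):

         grep -n '^AllowUsers' /etc/ssh/sshd_config /boot/config/ssh/sshd_config
         # Expected on a working setup, something like:
         #   /etc/ssh/sshd_config:NN:AllowUsers root unraidssh
         #   /boot/config/ssh/sshd_config:NN:AllowUsers root unraidssh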
  3. Your use case shouldn't need a manual, command-line-created user. What you are missing is the fix I just put in for a "regression error". Version 6.9+ of unRaid does some additional work to secure SSH; part of that includes restricting ssh to only root. I added updates to work with that change, but I was careless with a change I made after that fix and it went back to the old behavior. Please also see some of my in-line comments to your various questions.
  4. That message is coming from some of the new code. I'll see if I can suppress it. I'm extracting sshd_config from unRaid's initrd (root) volume so that I can (eventually) compare that baseline/default copy against what is persisted on the /boot USB drive. My high-level thinking is to produce INFO and possibly WARN level messages when there are differences which could produce unexpected behavior. A rough sketch of the comparison is below.
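     Roughly what that comparison would look like (a sketch only; the extraction step is elided, and /tmp/baseline_sshd_config is just an assumed landing spot for the copy pulled out of the initrd):

         if diff -u /tmp/baseline_sshd_config /boot/config/ssh/sshd_config; then
             echo "INFO: persisted sshd_config matches the stock default"
         else
             echo "WARN: persisted sshd_config differs from the stock default"
         fi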
  5. In case github didn't tell you, I merged your change and also updated the version to 2021.12.01.1 since it was the same day. There's a fair chance no one pulled the 2021.12.01 version, but better safe than sorry. It is very strange that it needed the usercontent url since, I think, it has been the other way since I took over years ago. Regardless, thanks for the extra effort in triage and the fix.
  6. @xxlbug Sorry about that, chief. The bug is in code which isn't even being used yet. Sorry about the trouble. Please try an update or, if it never installed, install from scratch with 2021.12.01.
  7. Gotta love continuous integration. Thanks for the heads up. I was testing a new git setup and hadn't tested this change yet. I've wrapped it with CDATA now. BTW, I haven't seen any issues between v6.9 and the denyhosts plugin, so you can safely lift the version block.
  8. I guess I've been asleep at the helm and missed the issues that arose with the v6.9 changes. My own use case doesn't involve logging in as non-root. Instead I have non-root users set up with authorized_keys files using a command="some command" stanza, and I was not aware of the change to /etc/profile until today (2021/11/01). Likewise I wasn't aware of how sshd_config was changed to disallow non-root from logging in. In truth I still don't know, since I am waiting for a parity check to finish so I can check what the bzimage has for sshd_config. Once I have that, I will modify the AllowUsers/DenyUsers (or Group variants) used by the default config to allow the users selected in the interface.

     I'm torn between a couple of thoughts on how to address the issue caused by the "HOME=/root" clause in /etc/profile:

       • Add a script to /etc/profile.d which resets the HOME variable. I've already experimented with this; it gets HOME correct but still results in the permission error due to the "cd $HOME" line.
       • Use a sed edit in place to modify /etc/profile so it skips setting HOME for users selected in the ssh plugin (roughly sketched below).

     I'm loath to change /etc/profile for some reason, even though I'm already changing /etc/ssh/sshd_config. The /etc/profile.d/ssh_config.sh script approach has great appeal but leaves us with the error from the change-directory command.

     I'm at best days (vs. hours or weeks) away from making a change, so I will monitor this thread for any feedback. I haven't found any plugins which make changes to /etc/profile.d, nor any guidance either. I want to avoid collisions with unRaid scripts in the future, so if anyone knows a "standard" let me know. If I don't hear anything AND go with the /etc/profile.d approach, I'll likely call the script plg_ssh-config.sh. Note I am likely going to change the name of the plugin from the simple "ssh" to "ssh_config" to improve clarity on what it does. doc..
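     To make the second option concrete, here is a rough, untested sketch of the in-place edit; it assumes the HOME=/root assignment appears on its own line exactly as shown, which may not match what /etc/profile actually contains:

         # Wrap unRaid's HOME=/root assignment so it only applies to root.
         # The resulting line in /etc/profile would read:
         #   [ "$(id -u)" = "0" ] && HOME=/root
         sed -i 's|^HOME=/root$|[ "$(id -u)" = "0" ] \&\& HOME=/root|' /etc/profile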
  9. *Short version* or <tl;dr;>: unRaid explicitly sets HOME=/root in /etc/profile, followed by "cd $HOME". If /root is mode 755, users wind up in /root when they login. If /root is mode 700, you get the behavior you report. I need to create a fresh USB boot and see how v6.9 and v6.9.3 act out of the box. I figured out a "fix" but need to find and read plugin best practices before I implement it.

     *Long version*: Oddly enough, I rarely login interactively to a user account. Most of the users I create have authorized_keys files/entries which run a specific application, e.g. an rsync server. This allows me to have workstations push backups to unRaid without giving an interactive shell. That said, I just did some poking around and found that /etc/profile explicitly sets "HOME=/root" then does a "cd $HOME". That in itself didn't initially generate the error you see, but my /root folder was mode 755 and yours might be a more appropriate 700. When I set /root to 700 I get the same messages you do. I need to try a fresh install to see if 700 is the new default and/or figure out why mine was 755. I thought it might have been a side effect of the plugin prior to my changes for v6.9, but I only chmod the .ssh folder and files.

     You will find that "echo $HOME" will return /root, which will not end well for many things. Even a simple "cd ~" will try to go to /root and fail. I've got an idea for a workaround but need to find out plugin best practice for adding something to /etc/profile.d. My "fix" will still result in the "cd..permission denied" error when logging in, but HOME will be set correctly.

     In the meantime, if you want to do the fix yourself, create a file called /etc/profile.d/ssh_config.sh with the content: export HOME=/home/$USER. Then do "chmod +x /etc/profile.d/ssh_config.sh" to make it executable. The file will go away on reboot since the /etc folder is memory mapped (not persisted). Hopefully I can find out an acceptable naming scheme for profile.d scripts before your next boot. The main thing I want is to avoid future collisions with unRaid scripts.
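     Gathering that temporary workaround into one copy-paste block (same content as above; it still will not persist across reboots since /etc lives in RAM):

         # Single quotes keep $USER literal so it expands at login time.
         echo 'export HOME=/home/$USER' > /etc/profile.d/ssh_config.sh
         chmod +x /etc/profile.d/ssh_config.sh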
  10. tl;dr; things that might help (please read below for a better understanding of how each one might help):

       • Check sshd_config for AllowUsers, DenyUsers, AllowGroups, DenyGroups config entries and adjust as necessary
       • Reset the ssh plugin:
         1. back up /boot/config/plugins/ssh/<user folders> (sketched below)
         2. go to the Plugins tab and remove the ssh plugin
         3. add the plugin back
         4. restore the user folders

     I'll take a shot at the first question but need to know more about your setup. First, as a complete guess, it sounds like some 6.9.x behavior might have kicked in for you. Starting from 6.9.0 (from the release notes) "only root user is permitted to login via ssh". I'm not sure how that is being done, but I guess it is either with AllowUsers, AllowGroups, DenyUsers, and/or DenyGroups configuration items in sshd_config. I don't have any of those, but my config would not have been changed by the upgrade; only new systems would have it blocked.

     If one of my plugins is causing the problem, then is it "ssh.plg" or "denyhosts.plg"? DenyHosts is more apt to block connections entirely (dropped before the password prompt), so I doubt it is DenyHosts. Problems with the ssh plugin tend to be either breaking the ability to login as root (not what you are seeing) or user keys not working. Your experiment with a correct vs. incorrect password implies you aren't using keys either.

     As to the second question, DenyHosts uses /boot/config/plugins/denyhosts/denyhosts.cfg for its basic config. The default "working" directory defined in that config file is /usr/local/denyhosts. That folder is volatile memory and will go away on its own. If you changed your working directory to a persistent store (e.g. /mnt/cache/apps) then you will need to clean that out manually. Similarly, ssh.plg uses /boot/config/plugins/ssh/ssh.cfg for its basic config. User keys are stored in /boot/config/plugins/ssh/<user folder>.

     If you use the "remove" option for either plugin then you start completely fresh. The ssh remove will delete /boot/config/plugins/ssh, which includes the user key folders, so back it up if you don't have those public keys somewhere else. Though I say "completely fresh", there is one thing which will persist even after a remove: the ssh config plugin persists changes to sshd_config to the copy managed by unRaid itself. If you made a change, say to the port, then that change will remain even after removing the plugin. That is especially important if PermitRootLogin is set to no; removing the plugin would then require you to edit the sshd_config file manually. Hint: you might need to enable telnet to login if /boot is not exported.
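     A sketch of the backup/restore around the remove + re-add (the paths come from the steps above; the backup destination is just an example):

         cp -a /boot/config/plugins/ssh /boot/ssh-plugin-backup      # back up user key folders
         # ...remove the ssh plugin from the Plugins tab, then add it back...
         cp -a /boot/ssh-plugin-backup/. /boot/config/plugins/ssh/   # restore user folders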
  11. I just updated the plugin to check for v6.9 or greater and to skip the copy of root's authorized keys for those versions. For now I am going to leave the /boot/config/plugins/ssh/root/.ssh folder in place in case someone needs to copy something out of it. I'm working on some other updates and will likely include a rename of the above folder in an upcoming update.
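     For the curious, the version gate is roughly along these lines (a sketch only, not the plugin's actual code; I'm assuming /etc/unraid-version as the source of the version string):

         UNRAID_VER=$(sed -n 's/^version="\([0-9.]*\)".*/\1/p' /etc/unraid-version)
         # sort -V puts the smaller version first, so this is true when UNRAID_VER >= 6.9
         if [ "$(printf '%s\n' "6.9" "$UNRAID_VER" | sort -V | head -n1)" = "6.9" ]; then
             echo "v6.9+: skipping copy of root's authorized_keys"
         fi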
  12. For now let's restrict it to 6.8.x. I need to dig into how to make them compatible and I'm not sure when I can do that. I still think there could be value for it for those who want to allow non-root to use ssh. I have been in a private conversation with another unraid user who found a couple of input sanitization issues in Beets (unlikely to fix as it is still v5 code I inherited/took over) and DenyHosts. The issue can only be exploited by someone who is already in the management interface and thus has "root" but it is worthy of a fix. I'll look at both of these as soon as I can. So many hobbies...
  13. Thanks so much for the effort. I'm heads down on a certification course right now, but I will add the >= putty version check soon. I'll also look into what is needed for the cert stuff, and maybe even switch to grabbing putty from the official site rather than keeping it as part of the plugin itself.
  14. Currently the plugin only copies the authorized_keys file from the /boot/config/plugins/ssh/<user>/.ssh folders to the ~<user>/.ssh folders. I have considered adding pub/priv key pairs which have been placed in the /boot folders, but would want to give serious thought to any security implications. It is likely no more of an attack surface, but still not something I want to do lightly.

     In the short term, what I would do is store your private key file either on the flash drive or, since you want to do this on array start, a place you think safe on the array. For example's sake, let's say /tmp/id_rsa. Then you can tell your script to use that identity with "-i /tmp/id_rsa". Your command would then become: ssh -i /tmp/id_rsa admin@10.20.20.1 /etc/rc.halt. Of course /tmp is wiped on boot, so a better choice would be somewhere on /mnt/user (see the sketch below).
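     The persistent-key variant might look like this (the share path is only an example; pick any location on the array you consider safe):

         KEY=/mnt/user/apps/ssh/id_rsa
         chmod 600 "$KEY"        # ssh refuses identity files with loose permissions
         ssh -i "$KEY" admin@10.20.20.1 /etc/rc.halt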
  15. You can copy the same public key under multiple users on the unRaid side. If you have already added the 2nd (non-root) user to the list in the ssh plugin, then there should be a path /boot/config/plugins/ssh/<user>/.ssh with a readme.txt file in it. If not, add the user in the plugin UI and hit apply. Once you have the path created, you can copy the authorized_keys file from the /boot/.../root/.ssh folder to the parallel folder of the new user (sketched below). That gets it "persisted" on the USB drive.

     Now, to get it into /home/<user> with the right owner and permissions, you just need to force the plugin to refresh things. The least impactful way is to go into the plugin UI, toggle a user, and hit apply. A stop+start cycle or a reboot would also do the trick.
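     The copy itself, spelled out ("someuser" is a placeholder for whatever 2nd user you enabled):

         cp /boot/config/plugins/ssh/root/.ssh/authorized_keys \
            /boot/config/plugins/ssh/someuser/.ssh/authorized_keys
         # Then toggle a user in the plugin UI and hit apply (or stop/start the
         # array) so the plugin installs it into /home/someuser/.ssh with the
         # correct owner and permissions.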