joshstrange

Members
  • Posts
    59

Converted

  • Gender
    Male
  • URL
    https://joshstrange.com
  • Personal Text
    3x Unraid servers (50TB, 40TB, 20TB)


joshstrange's Achievements

Rookie (2/14)

Reputation: 3

  1. So I just found this thread, and removing/commenting out `PermitRootLogin yes` does prevent password login. SSH will still prompt for the password (odd, I've never seen that before when I've set up a server to be key-only), but it will not accept the correct root password. I guess this is better than nothing, but it's frustrating that I had this working perfectly before upgrading and now sshd doesn't appear to respect the config I give it (notably `PubkeyAuthentication yes` and `PasswordAuthentication no`).
  2. Sorry I missed this reply; unfortunately it doesn't solve my problem. It may make it so I can log in with my keys (I already have that working), but it does nothing to prevent passwords when logging in. As in, I can still `ssh root@MYIP` and get a password prompt that takes my password and logs me in. I want to completely prevent that and make SSH key-ONLY. It's very odd to me that I've edited the `/etc/ssh/sshd_config` file and told it not to allow PasswordAuthentication, and yet SSH still works without a key.
  3. I know I had this working on <6.9, but I built a new UnRaid box recently, installed 6.10, and I think that broke it. In my /boot/config/go file I have:
     mkdir -p /root/.ssh
     cp /boot/custom/ssh/* /root/.ssh
     chmod 700 /root/.ssh
     chmod 600 /root/.ssh/*
     echo "PubkeyAuthentication yes" >> /etc/ssh/sshd_config
     echo "PasswordAuthentication no" >> /etc/ssh/sshd_config
     /etc/rc.d/rc.sshd restart
     This should take my authorized keys file from /boot/custom/ssh/, copy it to the right place, set the correct permissions, disable password login, and restart the SSH daemon. It does correctly let me log in with a key, but I can also still log in with my password, which I do not want. In past versions of UnRaid I know this configuration worked, so I'm confused as to what I'm doing wrong this time around. I also tried uncommenting the `PermitRootLogin prohibit-password` line and restarting SSH, but it didn't help. Any assistance would be greatly appreciated. Thank you! (A recap of this setup, plus a way to verify what sshd actually loaded, is sketched after this list.)
  4. Sorry, I should have added a diagnostic from the start. I only have one from 6.6.7; I stupidly forgot to grab one on 6.9.2 before I downgraded. If needed I can attempt another upgrade, since I now have a clean up/downgrade path. tower-diagnostics-20211028-1027.zip
  5. After one of the SATA cables to one of my cache drives died (something that UnRaid did not alert me to or reflect in the UI; it showed a gray ball next to the drive and reported no errors despite /var/log/syslog screaming about superblock issues), I decided to finally upgrade UnRaid. I upgraded to the newest version, and while I could connect to it from my local network, I was unable to access the internet from the UnRaid box (it could still see/ping everything on the local network). My DNS servers were set to 8.8.8.8/8.8.4.4. It appears that in the upgrade my default route was set to go through br1 (on 6.6.7 it is showing br0), and I think that was my issue, but I wasn't clear on how to change that safely or how to revert back to br1 if my guess that it should be br0 was wrong. After a failed attempt to update the DNS and having to pull the USB drive off the motherboard to manually edit the network config, I wasn't in the mood to attempt any new network-related fixes, so I downgraded. Apart from the panic when it didn't show my cache drives and pretended they were new drives, I was able to get back up and running on 6.6.7. My question is: why did UnRaid change my default route to br1 in the upgrade, and if this is the root of my problem, like I think it is, how do I safely switch from br1 to br0? (A sketch of inspecting and switching the default route follows this list.) I have included a screenshot of the 6.6.7 (working) config. I sadly didn't screenshot the 6.9.2 config, but I can confirm it said it was going through br1 for the default traffic, and on the command line when I listed the routes (`ip route` is what I think I ran) it showed br1 as "linkdown".
  6. Wow, I could kiss you all. I was 100% ready to write off this data (and that still may need to happen), but after replacing the SATA cable, disk 5 is showing up and I can see data on it. I'm starting another rebuild, so wish me luck. THANK YOU, THANK YOU, THANK YOU. I've had SATA cables go bad before, but I was sure this was 100% my fault (still a true statement) and that I had 2 drives on their last legs (again, that could still be the case, but at least now I have some hope). Thank you again, and I hope in a day or so I'll have it all rebuilt!
  7. Ok, I ended up just holding down the power button to kill it. Here are the diagnostics. When I opened up the machine, drive 5 (the second failing drive) looked like its SATA power cable was ajar, but I can't be sure whether that was from me removing the SATA data cable. After boot the drive isn't showing up at all now, so I'm shutting it back down to replace the SATA data cable, but I grabbed diagnostics first. tower-diagnostics-20200401-1257.zip
  8. Ok, I tried `root@Tower:~# poweroff -f` and got no output/response, and the machine is still up (pingable/sshable) a few minutes later. Am I not waiting long enough, or is it hung? In htop I can see that 2 instances of "shutdown -h 0 w", 1 instance of "poweroff -f", and 1 instance of "/usr/local/sbin/emhttpd" are all stuck in uninterruptible sleep (state "D"). (A sketch for listing D-state processes follows this list.)
  9. Tried that, but after 8 minutes it doesn't appear to be shutting down. Normally I would tail the syslog to figure out what the issue is, but it's not updating because it doesn't have any space left...
     root@Tower:~# poweroff
     Broadcast message from root@Tower (pts/1) (Wed Apr 1 11:52:26 2020):
     The system is going down for system halt NOW!
  10. Hmm, ok, I can't stop the rebuild. When I click to cancel and then confirm, it makes a request to /update.htm with the following form data:
      startState: STARTED
      file:
      csrf_token: A202<REMOVED>1BDA43
      cmdNoCheck: Cancel
      but it just hangs ("pending") and never completes. What is the safest way for me to take down this machine, or is pulling the power my only option? EDIT: It times out (504) after a while.
  11. On it! Thank you, guys. I really appreciate the responses while I'm in panic mode; it helps a lot!
  12. Crap, I just looked and /var/log is full, so idk if that's what is causing the issue (my /var/log is 2GB in size, for reference). (A log-cleanup sketch follows this list.)
  13. Well, I was going to attach them, but it's been working on it for 20 minutes now. I'm going to leave the page open, but should I expect it to ever finish? Also, here is an update on the rebuild process.
  14. Let me start off by saying: no, I did not do regular parity checks; no, I didn't take good inventory of my data to know what I've lost; yes, I understand that if you don't have a backup then that's your own fault; and yes, I am an idiot. Now that that's out of the way: I had a data disk go bad in my array. I got a new drive put in mid-day yesterday, and right before I went to sleep I checked the progress one last time and saw this (first image). I knew I was screwed but couldn't deal with it last night. This morning it looks like this (second image). I'm not going to beg for ways to save the data (I understand it's gone). What I want to do is stem the tide of damage. Should I kill the rebuild and just write off 10TB (the old drive was a 5TB that I replaced)? How can I do that while saving the remaining data? Again, I know this is my own fault and I've lost a good chunk of data, but I would really appreciate any help in saving whatever I can.
  15. I love the peace of mind Unraid gives me with my data. It’s hard to narrow it down to just 1 thing I want to see in 2020 so I won’t: 1. User shares spanning multiple servers (ideally docker-swarm-like capabilities as well) 2. VPN support baked in with a toggle on a per container/VM basis
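
Regarding the key-only SSH setup in posts 1-3: below is a minimal sketch of the go-file approach described there, plus a way to check which settings sshd actually ends up with. The paths and the `MYIP` placeholder are taken from the posts; whether appending to `/etc/ssh/sshd_config` survives a given UnRaid release is exactly the open question in those posts, so treat this as a starting point for verification rather than a confirmed fix.

```bash
# Sketch of the go-file fragment from post 3 (assumes authorized_keys lives in /boot/custom/ssh/).
mkdir -p /root/.ssh
cp /boot/custom/ssh/* /root/.ssh
chmod 700 /root/.ssh
chmod 600 /root/.ssh/*
echo "PubkeyAuthentication yes" >> /etc/ssh/sshd_config
echo "PasswordAuthentication no" >> /etc/ssh/sshd_config
/etc/rc.d/rc.sshd restart

# Parse the sshd config the same way sshd does and print the effective values
# of the three options that matter here.
sshd -T | grep -Ei 'passwordauthentication|pubkeyauthentication|permitrootlogin'

# From another machine, force password authentication only; if key-only login
# is enforced, this attempt should be refused. MYIP is a placeholder.
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password root@MYIP
```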
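
For the br1-vs-br0 default route question in post 5: here is a sketch of how the active route could be inspected and temporarily switched with iproute2. The gateway address 192.168.1.1 is an assumed placeholder, the change does not persist across a reboot, and the permanent fix would still have to go through UnRaid's network settings (the post suggests the config lives on the flash drive); this is not claimed to be the supported procedure.

```bash
# Show the routing table: which bridge does the "default" entry use,
# and is that interface reported as linkdown?
ip route show
ip -br link show

# Temporarily move the default route from br1 to br0.
# 192.168.1.1 stands in for the real gateway address.
ip route del default
ip route add default via 192.168.1.1 dev br0

# Quick connectivity check afterwards.
ping -c 3 8.8.8.8
```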
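
On the hung shutdown in posts 8-9: processes stuck in uninterruptible sleep ("D") usually cannot be killed, but listing them along with the kernel wait channel at least shows what they are blocked on. This is a generic diagnostic sketch, not anything specific to UnRaid's shutdown path.

```bash
# List processes in uninterruptible sleep ("D") together with the kernel
# wait channel (wchan), which hints at what they are blocked on.
ps -eo pid,stat,wchan:32,cmd | awk 'NR==1 || $2 ~ /^D/'

# Recent kernel messages sometimes include "task ... blocked for more than
# 120 seconds" warnings that point at the stuck I/O.
dmesg | tail -n 50
```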
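
And for the full /var/log in post 12: a sketch of checking how full the log filesystem is, finding the biggest files, and truncating an offender in place. Truncating rather than deleting keeps the file handle that syslog already has open valid; the /var/log/syslog name is an assumption, since the post doesn't say which log grew.

```bash
# How full is the log filesystem, and which files are the biggest?
df -h /var/log
du -ah /var/log | sort -rh | head -n 10

# Truncate the largest log in place instead of deleting it, so the daemon
# writing to it keeps a valid file handle. /var/log/syslog is an assumption.
truncate -s 0 /var/log/syslog
```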