elbweb

Members
  • Content Count: 5
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About elbweb
  • Rank: Newbie
  1. Hello! Recently the Fix Common Problems plugin let me know about a hacking attempt, and that's definitely what the logs imply. It looks like someone was trying to connect with a bunch of standard / well-known usernames and passwords (repeated over two days, hundreds of times a day, with entries like these):

     Jan 9 02:22:03 Tower sshd[1979]: Failed password for mysql from 91.xxx.x.x port 56816 ssh2
     Jan 9 02:22:03 Tower sshd[1979]: Connection closed by authenticating user mysql 91.xxx.x.x port 56816 [preauth]
     Jan 9 04:43:27 Tower sshd[130858]: Invalid user nginx from 91.xxx.x.x port 52020
     Jan 9 04:43:27 Tower sshd[130858]: error: Could not get shadow information for NOUSER
     Jan 9 04:43:27 Tower sshd[130858]: Failed password for invalid user nginx from 91.xxx.x.x port 52020 ssh2
     Jan 9 04:43:27 Tower sshd[130858]: Connection closed by invalid user nginx 91.xxx.x.x port 52020 [preauth]

     The part I don't understand is the ports, and what this log really means. My server is exposed to the internet only on a non-standard port that is forwarded to SSH, plus port 80 (redirected to 443) and 443. One of the port 443 redirects goes to the Unraid web portal, but that sits behind an NGINX auth on top of the Unraid auth itself.

     So, my question: how was a login attempt made on these different ports? And beyond taking down the access I currently expose, what else should I be doing to limit this? Thanks!
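     For reference, this is the sort of extra sshd hardening I'm considering on top of the non-standard port. It's only a sketch of standard OpenSSH sshd_config options, not my actual config, and the AllowUsers name is just a placeholder:

     # additions to /etc/ssh/sshd_config (sketch)
     PasswordAuthentication no          # key-based logins only, so password guessing gets nowhere
     PermitRootLogin prohibit-password  # or "no" if root never needs SSH at all
     AllowUsers someuser                # placeholder account name; only listed users may log in
     MaxAuthTries 3                     # drop a connection after a few failed attempts
     LoginGraceTime 30                  # seconds allowed to finish authenticating

     (sshd needs a restart for the changes to take effect.)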
  2. Hello! A little while ago (about six weeks) I added a few drives, doubled the memory, added two GPUs, and added a third NVMe to the cache pool. Since then (I've only just noticed) my parity checks have been returning errors. What's the best way to track down what's causing them? It seems like I should be able to use the SMART information, but is there any way to do that server-wide without going into each report manually?

     Server details: 2920X, 64GB RAM, 12x 8TB disks (1 parity, 1 precleared ready to swap in), 3 cache NVMe (512GB + 512GB + 1TB).
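     For what it's worth, this is roughly the one-pass check I was picturing. It's only a sketch: it has to run as root, the device letters will differ per system, and the NVMe cache drives would need /dev/nvme0 etc. instead of /dev/sdX:

     #!/bin/bash
     # Sketch: summarize SMART health for every SATA/SAS disk in one pass
     for d in /dev/sd[a-z]; do
         echo "=== $d ==="
         smartctl -H "$d" | grep -i 'overall-health'    # PASSED / FAILED summary
         smartctl -A "$d" | grep -Ei 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
     done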
  3. So, I went through and disabled one of the states in my BIOS and haven't had the problem again yet. No idea whether that's the real fix, but so far so good. Thanks @johnnie.black
  4. The cache drive isn't full; it has about 400GB free. From what I've read about that 'error' in the log, it's more about writes being directed straight to the drive for one reason or another, instead of landing on the cache and then being migrated. I've since turned off caching for the two shares that are stored on disk, so it's a clean break: shares live either on the cache or on the array disks.

     Thanks! I'll look into this! It's hard to tell whether it works since I can't 'trigger' the failure, but it can't hurt!
  5. Hello! I've been running Unraid for a while now and generally loving it. I have a problem where, every few days, the whole machine stops responding. It's plugged into an external monitor and keyboard, and when it enters this state not even Num Lock will toggle on the keyboard. I piped the system logs to the flash drive in hopes of getting more information (the logs shown while it was running looked fine). This is the last bit of the log, which doesn't seem interesting at all:

     Jun 5 01:54:47 Tower shfs: share cache full
     Jun 5 01:55:19 Tower shfs: share cache full
     Jun 5 02:47:07 Tower shfs: share cache full
     Jun 5 02:48:37 Tower shfs: share cache full
     Jun 5 03:00:01 Tower Plugin Auto Update: Checking for available plugin updates
     Jun 5 03:00:05 Tower Plugin Auto Update: Community Applications Plugin Auto Update finished
     Jun 5 03:40:13 Tower crond[2476]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null

     Judging by the log file itself, it was last written to about an hour after that final entry. When the server comes back online it takes about 12 hours to verify the array with no other usage, or 24-36 hours while the server is in use. That seems normal, but I'd like to avoid this altogether. I have the 'performance' mode turned on.

     General hardware: AMD 2920X, 32GB RAM, 8x 8TB HDD, 2x 512GB NVMe cache. My downloads, appdata, domains, system, and isos shares all live on cache only.

     Any help would be appreciated!

     tower-diagnostics-20190605-1333.zip
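     In case it's useful, this is the quick manual check I plan to run the next time it recovers. It's just a sketch built around the paths in the log above; the mover path is the stock one from that log and /mnt/cache is the usual cache mount point:

     df -h /mnt/cache            # is the cache pool really full when "share cache full" shows up?
     /usr/local/sbin/mover       # run the mover in the foreground to see why cron reported "exit status 1"
     tail -n 100 /var/log/syslog # last entries written before the stall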