korpo53

Everything posted by korpo53

  1. This seems like it should be relatively simple, but it's stumping me. I moved my entire setup to new hardware, including a new HBA, new motherboard, etc. I kept the same flash drive, backed everything up from it, then wiped and reloaded it, intending to reimport all my disks by their serial numbers, since everything has moved around. The problem is, I can't find my screenshot of which disks are which. How can I, ideally from the command line, identify which of my disks is the parity disk? I'm considering just making a bunch of mount points, mounting each disk read-only, and seeing which ones have useful data on them, but I have to imagine there's an easier way. I poked around my backup of the flash drive, but I'm not seeing a file that indicates which drive is mapped to which disk slot. Any ideas?
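For anyone searching later, the mount-and-peek fallback described above would look roughly like this. This is a sketch, not an unRAID-official method: the device glob is an assumption, it needs to run as root on the server, and it leans on the fact that the parity disk holds raw parity data with no filesystem, so it should be the one that refuses to mount.

```shell
# Rough sketch: probe each first partition read-only and see what mounts.
# /dev/sd[a-z]1 is an assumption -- adjust for your controller's naming.
mkdir -p /tmp/probe
for part in /dev/sd[a-z]1; do
  [ -b "$part" ] || continue            # skip if the glob didn't match a device
  if mount -o ro "$part" /tmp/probe 2>/dev/null; then
    echo "$part mounts -- data disk, top-level contents:"
    ls /tmp/probe | head -n 3
    umount /tmp/probe
  else
    echo "$part won't mount -- parity candidate (or an empty/foreign disk)"
  fi
done
```

The read-only mount means nothing gets written, so it's safe to run before any array assignments are made.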
  2. You missed a slash there, it should be "cat /etc/resolv.conf" Google also says it might not work, and to try "scutil --dns" Try "netstat -nr". I don't really do Mac stuff, but Google tells me that should show your routes. Are you running a DNS server on .11 and your router on .1? Most people just point to their router's DNS, maybe that's a typo and the source of your problems? Missing bond0 and br0 is no big thing, it just means you didn't set up bonding and bridging.
  3. [*] To get any kind of benefit from unRAID, you're going to want to move that NTFS data off of those drives and onto the native unRAID stuff. Unassigned devices with NTFS is just a hack to let you migrate data, you really don't want to use it as your main storage.
     [*] Performance of what? For both Windows and unRAID, reads are pretty much going to be at wire speed: 125MB/s over gigabit. Writes will vary a lot on both sides, how you set it up determines which will be faster. Write speed is not unRAID's strong point.
     [*] Not relevant. You can make Windows or Linux boxes talk to Windows or Linux boxes, it's fairly easy in all directions.
     [*] A SC846 is a case, so we have no way to know what specific hardware you're going to stick in that case. If your CPU is dreadfully slow, it might slow things down trying to calculate parity and handle some of the downloader dockers, but we'd need more details on what you're going to put in the case. SATA1 shouldn't be a problem, it's still faster than wire speed.
  4. To put some numbers on it... The spec sheet for the drives in my server says they're 9W average power consumption, and I have eight of them. The rule of thumb for most of the US is that something running 24/7/365 costs you about $1/yr per watt. So each of the 8 drives in my server costs me $9/yr, for a total of $72/yr, or $6/mo. Yeah, for that, I'll just leave them spinning.
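For anyone who wants to check the math, the rule of thumb works out like this (the $1/yr per watt figure assumes roughly $0.114/kWh: 1 W running all year is 8760 Wh, or about 8.76 kWh, which is about a dollar at that rate):

```shell
# Back-of-envelope check of the $1/yr-per-watt rule of thumb.
watts_per_drive=9
drives=8
total_watts=$((watts_per_drive * drives))   # 72 W of always-on draw
per_year=$total_watts                       # $1/yr per watt -> $72/yr
per_month=$((per_year / 12))                # -> $6/mo
echo "${total_watts}W -> \$${per_year}/yr -> \$${per_month}/mo"
# prints: 72W -> $72/yr -> $6/mo
```

Swap in your own drive count, spec-sheet wattage, and local electric rate to get your number.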
  5. So, while your internet was broken, you couldn't get to: \\myservername\catpictures And you also couldn't get to: \\192.168.100.50\catpictures ? Can you get to it by IP right now? It sounds more like your routing/subnetting is broken than anything else, and that you're falling back to some kind of broadcast thing that's limping along. Can you paste the result of ifconfig on your server, as well as on your workstation? Just the IP and netmask, and just for the br0, bond0, and eth0 interfaces. The rest are probably veth and are for docker stuff. It would also be helpful to have the results of "cat /etc/resolv.conf" and "route -n". If you can get the same kind of info off your workstation too, that'd be good, but I don't know the commands on a Mac, if they're different.
  6. You can always try getting to shares by IP, just hit \\192.168.100.50\catpictures and that will tell you whether you're having a problem resolving names or not. If it's speedy, but going by name is slow, your name resolution is jacked. But no, it sounds like you have something screwed up on your local network. Internet access is not a requirement for SMB shares.
  7. I would expect a lot of people use a static address assignment for their servers, rather than DHCP. I use DHCP with a reservation myself, but setting it up manually isn't uncommon. This may be the case, but again I'd have to wonder what the reason would be to do this in the unRAID configuration and not your router. There's no benefit that I can see to using an external DNS server if you have a working DNS resolver/forwarder on your router. I don't know what would happen to your unRAID box if it couldn't reach its DNS server temporarily, like if you broke your router, but it seems to recover gracefully if I unplug the network to move a cable and plug it back in, so I'm guessing it doesn't flip out too much. On the other hand, the difference between my router with a cached entry (5ms) and Google's DNS (45ms) is so small that you wouldn't know it unless you were timing it. So unless your unRAID box needs to resolve things that only your internal DNS server can resolve, there's no real harm in pointing it directly at the internet either. To put it another way: there's no harm in doing it either way, as long as you do it right. There are some benefits to doing it on the router, but they're fairly minor unless you start getting into fancier setups.
  8. I would expect a lot of people use a static address assignment for their servers, rather than DHCP. I use DHCP with a reservation myself, but setting it up manually isn't uncommon.
  9. I bought a few of the Dell T20s that were on sale recently for $130/ea. They can handle four drives.
  10. Bah, I had to hit "advanced view" in the upper right corner to show the edit button. So yeah, I was semi-blind. Thanks!
  11. I recently moved my docker stuff to a dedicated docker drive mounted at /mnt/disks/docker, so we have /mnt/disks/docker/appdata/plex, /mnt/disks/docker/appdata/nginx, etc. Now I'm getting this in FCP: I'm not seeing where I can change the AppData Config Path mount to be mounted RW:Slave, there's no edit button like there is for all the other mounts. Am I blind? Is it relatively safe to ignore this? I can't find much info on what the ramifications of RW vs. RW:Slave are.
  12. This is great! It let me get rid of my nginx/LE VM I was using, and it saved me from having to learn to do it with haproxy on my pfSense router. Here's my config, which is shamelessly stolen and modified from posts here in addition to what I was already running, and which might be of some help to people trying to piece one together.

      default

      server {
          listen 80;
          listen 443 ssl http2;
          server_name mysecretdomain.com www.mysecretdomain.com;

          include /config/nginx/proxy.conf;
          include /config/nginx/auth.conf;

          ssl_certificate /config/keys/fullchain.pem;
          ssl_certificate_key /config/keys/privkey.pem;
          ssl_dhparam /config/nginx/dhparams.pem;
          ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
          ssl_prefer_server_ciphers on;

          location / {
              proxy_pass http://192.168.100.50/;
              proxy_buffering off;
          }
          location /nzbget {
              proxy_pass http://192.168.100.50:6789;
          }
          location /couchpotato {
              proxy_pass http://192.168.100.50:5050/couchpotato;
          }
          location /sonarr {
              proxy_pass http://192.168.100.50:8989/sonarr;
          }
          location /plexpy {
              proxy_pass http://192.168.100.50:8181/plexpy;
          }
      }

      auth.conf

      satisfy any;
      allow 192.168.100.0/24;
      deny all;
      auth_basic "Restricted";
      auth_basic_user_file /config/nginx/.htpasswd;

      proxy.conf

      client_max_body_size 0;
      client_body_buffer_size 128k;

      # Timeout if the real server is dead
      proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

      # Advanced Proxy Config
      send_timeout 5m;
      proxy_read_timeout 240;
      proxy_send_timeout 240;
      proxy_connect_timeout 240;

      # Basic Proxy Config
      proxy_set_header Host $host:$server_port;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto https;
      proxy_redirect http:// $scheme://;
      proxy_http_version 1.1;
      proxy_set_header Connection "";
      proxy_cache_bypass $cookie_session;
      proxy_no_cache $cookie_session;
      proxy_buffers 32 4k;
  13. Use Putty to remote into your server, then run:

      mkdir /boot/scripts
      wget https://raw.githubusercontent.com/trinapicot/unraid-diskmv/master/diskmv -O /boot/scripts/diskmv
      wget https://raw.githubusercontent.com/trinapicot/unraid-diskmv/master/consld8 -O /boot/scripts/consld8
      echo "ln -s /boot/scripts/diskmv /usr/sbin" >> /boot/config/go
      echo "ln -s /boot/scripts/consld8 /usr/sbin" >> /boot/config/go

      That'll:
      • Create a directory to hold the scripts that survives reboots.
      • Download the scripts to that directory.
      • Add lines to your startup script to create symlinks (shortcuts, sort of) to those permanent files in a non-permanent space that makes them more usable.

      After doing that, if you don't want to reboot, you can just run the below to create the symlinks right now. They'll get recreated every boot by the go script, so you only have to do this part the first time.

      ln -s /boot/scripts/diskmv /usr/sbin
      ln -s /boot/scripts/consld8 /usr/sbin
  14. What makes one VPN "better" than another in your eyes? OpenVPN works, the downside is that you need a client for it. L2TP/IPSEC doesn't generally need a client, but it's a pain to set up. There are more pros and cons of every VPN tech out there, a quick Google search should guide you in the right direction. As for doing it with a docker or via software or something vs. on an appliance or at the edge... up to you, there are again pros and cons of everything. I wouldn't run my business on a docker app with an OVPN thing running in it, but I wouldn't spend multiple hundreds of dollars to VPN to my home server either.
  15. There's a "Krusader" docker app, it's a gui file manager that seems to work well enough.
  16. In terms of functionality, if you're at a location where you can't use a VPN for one reason or another (can't install a client, ports blocked, DPI blocks it even over normal ports, etc.) then you've left yourself zero functionality. On the other hand, I've yet to see a place that blocks outgoing https, and I've worked at everything from screwdriver shops to multinational banks and the DOE. To put it another way, the number of places I've been that allow VPN of any kind out is far, far less than the number of places that allow https out.

      In terms of security, sure, you could say that a cert-based VPN is more secure than a RP with password-based access... but we're talking about securing access to your torrent client, not Iranian nuclear secrets. I'm sure the NSA could defeat the fancy AES-whatever encryption my RP is talking on, but I'm not super concerned about them trying to get in. I am concerned that some Korean script kiddie will try to exploit a known vulnerability in something like CouchPotato, so I put my SSL-only, authenticating RP in as a roadblock. If that KSK finds a hole in my RP and gets through and exploits something in my CP docker? More power to him, he should go work for the NSA.

      Further, the principle of least access would dictate that if you only need access to a small list of services from the outside, then you should only give access to those services from the outside. I don't need to access my other machines, my printer, my ancient switch with an ancient and unpatched and insecure management interface, my webcams, SNMP, my router's management page, etc. from the outside. Exposing them in any way does me no good, so why would I do it via a VPN? Is the risk associated with the slightly lower security of a password-based RP vs. a cert-based VPN worth the higher risk of exposing all kinds of potentially unpatched crap to the internet?
VPNs have their place, but a RP is a better way to allow secure access to potentially insecure pages from outside the LAN. That's why things like Netscalers, BIG-IPs, and TMGs exist and cost a lot of money. Is an Apache-based docker RP in the same class? No. Is my RP hiding my torrent client likely to get attacked as often as the web servers at my bank that are protected by BIG-IPs? No.
  17. Depending on your router, it may be possible to load a geo-based blocklist to block all IPs from Russia, China, etc. I don't have a cheapo home router so I can't speak to those, but it's fairly easy if you're using something nice or something software-based like pfSense. When you say "I do need remote access to the server"... what do you mean? For which services? It's fairly straightforward to set up either VPN or reverse proxy access to your services, which will go a long, long ways towards securing things.
  18. Anything is possible. You're going to need to be more specific as to what you're trying to do and why to get any useful help though.
  19. Reverse proxy is the way to go. Smdion's Apache RP docker plus a simple config plus a free SSL cert from StartSSL and you're good to go. It lets you disable the passwords on all those sites and just authenticate at the proxy instead. What it has over oVPN is that you don't need a client to access it. I can add a show to Sonarr from my work computer, for example.
  20. In general, jumbo frames don't get you much apart from lowered CPU usage under heavy load. Given that the box you listed is a quad-core 4.2GHz thing, CPU usage is probably not a big deal.
  21. Probably. But it's also probably a very bad idea unless you have one of those fancy USB drives with an SSD controller built in. Standard USB drives are pretty dumb and not very good about wear leveling, whereas SSD controllers are very good at it. Also, even if you had a 16G or 32G flash drive, that's pretty small as far as cache drives go... considering you can get 128G real SSDs for like $50 these days.
  22. This says your i3-4130 (4146) is about 2/3 of my FX-6300 (6352), and my 6300 is plenty fast to run a rather big array (10 drives) and a whole bunch of dockers (7) and a few VMs (3) without breaking a sweat. The only thing I'd worry a bit about is if you had a bunch of people on Plex at once, since transcodes can really eat your CPU. If Plex is just for you, then you're probably good with your 4130 more or less forever.
  23. [*] I joined mine to my AD, and it was fairly straightforward. My only minor headache was having to remove the stale computer account left over from back in the day when I first tried joining it to AD, but that's all my fault.
      [*] It should stay joined to AD after a reboot; mine does.
      [*] The web UI on my server is probably 5x slower after joining to AD, but I don't see any reason it should be. I corrected a few minor setup problems when I did the upgrade to the final version of 6: added it to AD, added the UPS functionality, removed disk shares... and now the web UI just sits for like 5s before changing tabs. The CPU is under 10% all the time, the memory usage is at 15%, and the actual sharing performance seems normal. Just the web UI, which is weird. A reboot didn't clear it.

      My only pointers are general ones:
      • For the "default user/group" thing, pick something like Administrator and Domain Admins, or create users and groups for it and disable them/leave them empty. It doesn't appear you can remove those groups from the permissions; if you do, they just come back. It also looks like you can't remove Everyone, nobody, users, CREATOR OWNER, and CREATOR GROUP from the ACLs. Lame, but maybe that's a Linux thing. My old Solaris server loved Windows ACLs.
      • Create groups for each share and its permissions, and assign things that way. Say your server is called UNRAID and you have a share called MOVIES: make groups called UNRAID-MOVIES-RWM and UNRAID-MOVIES-RO, then add the RWM permission (actually FC, since unRAID doesn't seem to support the full set of ACLs) to the MOVIES share for the first group, the RO permission for the second, etc. Then you don't have to update permissions on the files when you want a new user to have access; you can just do it through ADUC.
      • Obviously, make sure your DNS, time, and all your other networking stuff is working. AD doesn't like those things being broken.
  24. That's what I thought. Like I said, the data isn't irreplaceable so it's not the end of the world, and I asked because I vaguely remember reading that what I'd just done was a bad idea. Thanks all.
  25. I'm in the process of moving things around to sort out some of my fragmentation across disks, and I think I may have lost some data. I excluded disk1 from the user share ISOs. Then, via an SSH session, I went into /mnt/disk1/ISOs and ran "mv * /mnt/user/ISOs". Blip, gone; I can't find the data. It certainly didn't move it all to another disk, as the command came back in about half a second with a bunch of complaints that files didn't exist. The data doesn't show up in the /mnt/user/ISOs folder, the share, or any of the /mnt/diskX folders. I realize I should have moved the data to another disk directly rather than via the user share, but is it gone for good? I can replace the data, but if I'm just looking in the wrong place and can run something to fix it, any help would be appreciated.
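For the archives, the disk-to-disk move I should have done can be sketched like this. The paths and the helper name are made up for illustration; the point is to stay entirely on /mnt/diskX paths (never mixing /mnt/diskX and /mnt/user for the same share), copy first, verify, and only then delete the source.

```shell
# Hypothetical helper: copy a directory disk-to-disk, verify the copy is
# identical, and remove the source only if the verification passes.
safe_disk_move() {
  src=$1
  dst=$2
  mkdir -p "$dst" || return 1
  cp -a "$src/." "$dst/" || return 1   # copy contents, preserving attributes
  diff -r "$src" "$dst" || return 1    # refuse to delete unless trees match
  rm -rf "$src"
}

# e.g. safe_disk_move /mnt/disk1/ISOs /mnt/disk2/ISOs
```

It's slower than a bare mv since it copies rather than renames, but after the week I just had, I'll take slow over gone.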