SnickySnacks

Members · 105 posts
Everything posted by SnickySnacks

  1. Quote: "Howdy SnickySnacks, do you feel this may be a general answer to the problems a number of users are having accessing shares from other devices? If so, I'd like to post it more visibly. And can you add to your post how new users would restart Samba?"

     I'm not sure it's a general answer (I was able to connect on Windows 10 in private mode regardless); it likely only affects devices that use the older NTLMv1 authentication. My debugging steps went like this:

     Added to smb.conf:
     log level = 2
     syslog = 3

     Reloaded the config from telnet:
     smbcontrol smbd reload-config

     Then saw in the syslog: "NTLMv1 passwords NOT PERMITTED for user". This indicated that my device was using the older protocol, which was likely disabled for security reasons. Adding
     ntlm auth = yes
     and reloading again with
     smbcontrol smbd reload-config
     allowed the older authentication.

     This shouldn't make a difference for Windows 10 (which people seem to be having problems with), but some devices that don't support NTLMv2 may require it. From the smb.conf manpage: https://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html It looks like unRAID or a more recent Samba version defaults this to "no".
  2. I was initially going to post that my DuneHD was working, but I realized I had my shares set to "Secure", not "Private". To get it working in "Private" mode, add ntlm auth = yes to your Samba config under Settings -> SMB and restart Samba.
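For new users, the whole change can be sketched like this (the exact label of the config box varies by unRAID version, and the init-script path is the usual Slackware location, so treat both as assumptions):

```
# Paste into Settings -> SMB -> Samba extra config, then apply:
ntlm auth = yes

# Then, from a console/telnet session, either reload the running config:
#   smbcontrol smbd reload-config
# or restart Samba entirely (Slackware-style init script; path may vary):
#   /etc/rc.d/rc.samba restart
```

Note that enabling NTLMv1 weakens security, so only do this if a legacy device actually needs it.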
  3. It really comes down to what you're trying to do. One of my friends has unraid set up with a subsonic docker that's exposed to the public (password protected) so he can stream his music to his phone/car/whatever. Having a VPN set up is good for accessing your network, accessing files as if they were on the local net, etc but of course requires you to have the VPN set up on every device that needs access. A split tunnel VPN should let you access your shares more or less exactly as if you were at home. I use TeamViewer personal for assisting friends/family with computer issues and I've found it to be a nice, secure way of accessing PCs but you wouldn't want to, for example, watch video over it. But for doing general maintenance, transferring small files around, etc, it's easy enough.
  4. VPN is the way to do this. In theory you could expose individual services and add authentication to them, but that requires each exposed service to be properly secured, and many do not support strong security; there's no safe way to expose many services that we take for granted as being "safe" inside our internal networks. A VPN accomplishes the same thing but lets you control authentication for all services via a single exposed port, without needing to secure every port individually. If the server needs something generally accessible to the public (a web service or similar), then you expose it and secure it. If it's something you never want the public accessing, a VPN is far safer than trying to corral multiple potential security holes.
  5. While I can't tell you how to do this anymore (it's been too long since I've used a setup like this), it is possible to set it up to route only traffic intended for your internal network into the VPN. The main benefit is that you don't pass all your internet traffic over the VPN (which isn't necessary if all you're doing is accessing your home network): normal internet traffic goes out directly from your client, and only traffic intended for your home network goes over the VPN. This also lets you do things like access your home network and your work network at the same time; otherwise all traffic gets routed over your home network and you lose access to your internal corporate network. Google "vpn split tunneling" to look up information on this topic.
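As a rough sketch of what split tunneling looks like in practice (assuming an OpenVPN client and a 192.168.1.0/24 home LAN; your subnet, server setup, and VPN software will differ), the client config pulls only the home route instead of redirecting everything:

```
# client.ovpn fragment - hypothetical example
# Ignore routes pushed by the server (e.g. redirect-gateway):
route-nopull
# Route only home-LAN traffic through the tunnel:
route 192.168.1.0 255.255.255.0
```

Everything else then uses the client's normal default gateway.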
  6. It's probably not worth spending the money on a static IP if you don't need 100% reliability. A free dynamic DNS provider generally has a program on a computer inside the network update the external DNS record every hour or so; you then connect using a name like myname.freedns.com, which resolves to your current IP. Your home IP address generally doesn't change all that often, so this works fine for personal use. As Ashman70 mentioned, it's a good idea to verify with your work whether outgoing VPN connections are permitted, as unmonitored outgoing connections are often considered a security risk in a corporate environment and may be a security violation. This, of course, depends on your specific workplace rules.
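The update side is usually just a tiny periodic job. As a hedged example (the URL, token, and hostname here are placeholders, since every DDNS provider documents its own update endpoint), a cron entry on any always-on box inside the network might look like:

```
# /etc/cron.d/ddns - refresh the dynamic DNS record hourly (placeholder URL/token)
0 * * * * root curl -fsS "https://freedns.example.com/update?host=myname&token=SECRET" >/dev/null
```

Most providers also ship a ready-made client (e.g. ddclient) that does the same thing.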
  7. Block them server-side with the Samba config, under Settings -> SMB -> Samba extra config:
     veto files = /._*/.DS_Store/.AppleDouble/.Trashes/.TemporaryItems/.Spotlight-V100/
     delete veto files = yes
  8. I can't comment on the hardware, but I set up a home server for a friend to back up to CrashPlan recently. The current version of the CrashPlan docker is very simple to use: set up the docker, map the shares you want to back up, then configure CrashPlan via the web UI to back up those shares. Overall I think it took less than 15 minutes to set up. His server is fairly small (1 TB), so if you're planning on backing up a huge amount of data I can't really comment on the time it will take, but it's very quick to test with the CrashPlan docker from Community Apps.
  9. In this case the only reason to preclear it would be to test it. If you're already happy with your level of testing, preclearing wouldn't add anything, as the whole disk will be overwritten anyway when the system rebuilds the missing disk onto it.
  10. Do you find it more reliable if you use a cabled connection? I only use it to back up my work laptop so I never bother to bring the ethernet adapter home. Given that backing it up takes hours anyways, leaving it to run overnight once a week is no big deal. Once it successfully mounts (or whatever) the time machine share, it works fine doing its hourly backups until I disconnect it and bring it into work for the week. I will note that I always connect via name ("tower-AFP") and if I care, I can tell when the time machine backup is ready to work because I can't browse the sparse bundle file in the finder until it's done mounting (or whatever it's doing). My issue is that once the share is mounted, unraid will continue to spin up the drive the share is on for 24 hours before the afp connection times out and the disk is allowed to finally spin down, even if my laptop has long been disconnected. I mentioned this issue a while back here: http://lime-technology.com/forum/index.php?topic=41729.msg396720 But nobody seemed to care and it wasn't a real big deal to just let it timeout once a week.
  11. My personal experience is that it can take upwards of an hour for my time machine share to come online on my macbook pro after connecting it to the network (TM share is a user share on a single disk). While this is happening the first one or two TM backup attempts will fail. I usually just connect it via wifi and then leave it on overnight to do the backup.
  12. I block a few more items than this, but similar:
      veto files = /._*/.DS_Store/.AppleDouble/.Trashes/.TemporaryItems/.Spotlight-V100/
      delete veto files = yes
  13. As one of the people waiting (with a spare parity 2 drive ready to go) for the stable release version, I just wanted to thank everyone for their hard work and time finding and fixing the bugs in the beta version. I appreciate it.
  14. For the AMD-Vi, yeah, we had updated the BIOS back in August when we originally set the box up (and had all the issues). In February, given that we didn't need the VMs and were just trying to recover data, we didn't go back and look for new updates. I'm not even sure how much we'll need AMD-Vi support going forward; if it becomes an issue, it's probably better to just replace the box with server-grade hardware, ECC RAM, etc.

      Subsonic and CrashPlan both defaulted their configs to weird places: one was /opt/appdata and the other /mnt/SSD or something. Neither survived a reboot. It wasn't a big deal, it just caught us by surprise, since the instructions weren't clear about what that path needed to be.

      Subsonic falls into the category of apps that try to talk to the docker IP under one specific circumstance: when connecting via the (custom Subsonic) URL to the Subsonic server, if you are outside the local network you get the proper external IP, but if you are inside the local network you get the 172 (internal docker) address instead of the local network's 192 address. Next time I'm over there I'll play around with the network settings; I'm pretty sure network=host will fix the issue.

      As of a day or two ago, CrashPlan finished backing everything up, so no more worries on my friend's part.
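For reference, the bridge-to-host change would look roughly like this in docker terms (a sketch in docker-compose form; the image name, port, and paths are illustrative, and unRAID's docker GUI exposes the same choice as the container's network type setting):

```
# docker-compose.yml fragment (illustrative only)
services:
  subsonic:
    image: example/subsonic            # placeholder image name
    network_mode: host                 # share the host's network stack instead of
                                       # the default bridge, so the app sees the
                                       # LAN 192.x address rather than a 172.x one
    volumes:
      - /mnt/cache/appdata/subsonic:/config
```

The trade-off is that with host networking the container's ports are no longer remappable.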
  15. So, back in August or so I had a friend who was interested in setting up an Unraid build to test out VMs and stuff. I'd long been championing having a server set up for backups and general storage; I have a ~18TB Norco in my closet that I've been using for media streaming for a few years now, and I've come to appreciate the utility of having dedicated storage with built-in docker/VM support.

      At the time, they had an LG N2A2 NAS set up in RAID 0 (I bet you can see where this is going) to store pictures/music, and a laptop running Subsonic and whatnot to handle out-of-home streaming. This of course resulted in many calls back to the house to "check if the laptop is on" when Subsonic stopped streaming. I extolled the virtues of putting everything onto Unraid dockers with automated backups, parity checks, etc. They set up a small proof-of-concept server with a few spare drives but ended up not really doing anything with it until a few weeks ago, when their LG NAS melted down. Being RAID 0, if they couldn't access the data via the NAS, there would be no way to get it back. They did have backups, but since those were manual they were weeks out of date. Suddenly, Unraid made a lot more sense to them. Cue the frantic call from my friend to come over and help rescue their data, move it to the Unraid server, and set up CrashPlan backups. Fortunately, we were able to get the NAS to come up sporadically so we could pull data off it, and we managed to rescue everything to Unraid.

      What went right:
      Unraid was easy to set up, configure, and get working in an afternoon. Adding functionality through dockers was super simple, despite a few hiccups. In the end they now have a server that monitors itself, backs itself up, and is much easier to recover if it fails.

      What went wrong:
      Despite being advertised as having AMD-Vi support, the motherboard we chose for the PoC build did not work with it enabled. It would cause many weird hardware errors, lockups, and reboots, and we eventually had to disable it to get things working. Once disabled, everything was smooth.

      The Limetech docker guide never fully explained that the config for dockers has to live on an actual drive. While this may be obvious, many dockers default to something like /opt/appdata, which we took to be the correct path for configs. This directory disappeared and was recreated when we rebooted the server. It was trivial to redo the docker config on the cache after that reboot, but it was an unwelcome surprise that it wasn't made explicit anywhere that the default paths for the various docker configs weren't "correct" out of the box. It's obvious when it comes to the path for media, but not for config.

      The Subsonic docker reports an internal IP address in the 172.x.x.x range due to how docker hands out internal IPs, which results in Subsonic not working when accessed from the app at home (https://lime-technology.com/forum/index.php?topic=40878.0). I suspect the answer is to change the docker from net=bridge to net=host, but I haven't had time to mess with it since it's otherwise working.

      Purchased a $10 USB wifi adapter thinking it would let us hook the server up wirelessly, then realized Unraid doesn't have wifi support, which is a shame. I tried building the driver for it, but without rebuilding the entire kernel it was a no-go, so I decided it wasn't worth the hassle.

      Purchased hardware:
      2-core A6-7400K Black Edition boxed processor + FM2A88M-HD+ Socket FM2+/FM2 mATX
      Crucial Ballistix Sport 4GB DDR3-1600 (PC3-12800) CL9 desktop memory kit (two 2GB modules)
      NZXT Classic Series Source 210 mid-tower ATX case
      Kingwin 120 x 120mm long-life bearing case fan
      2x 3TB WD Blue HDDs
      8GB SanDisk Cruzer Fit

      Reused hardware:
      Corsair 750W non-modular power supply (so many wires)

      Total hardware cost was about $160 without the power supply, plus $160 for the two drives. Peace of mind concerning your data's safety? Priceless.
  16. So the IOMMU problem is just with the onboard NIC? Good to know. On the motherboard we were using we didn't have another way to check, so the only option was to disable IOMMU entirely.
  17. I had something similar with a test system I was setting up. It seemed to be a faulty AMD-Vi (IOMMU) implementation on the motherboard or something; disabling it in the BIOS seemed to correct the issue. Perhaps try that?
  18. I don't know if this still holds true, but there was some talk about not using Kingston RAM in the X10 boards: https://forums.freenas.org/index.php?threads/ram-recommendations-for-supermicro-x10-lga1150-motherboards.23291/
  19. He has split level=1 and is copying to the root of the share, so it's likely putting the file onto whatever disk all the other BDs are on, no? (My thinking is that the split level affects where directories are created, not files, so the root directory is probably all on one disk.) To be honest I have no idea how split levels work when copying files to the root of the share, but that seems to be the most likely culprit. Can you look in the share directory in the Unraid menu, check the "Location" column, and see if perhaps all the BDs you have copied are on a single disk (or set of disks) that is now full? My logic goes: the average BD is about 30GB and your error screenshot shows 105 items in that share, so ~3TB is about right for one disk filled to the brim.
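To sanity-check that back-of-the-envelope estimate (the ~30GB-per-BD figure is my own rough average, not something from the screenshot):

```shell
# 105 items at roughly 30 GB each
echo "$((105 * 30)) GB"   # prints "3150 GB", i.e. ~3.1 TB
```

Which lines up with one full 3TB data disk.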
  20. I'm using the same make/model drive as my parity and I can confirm that it should be the exact same size as every other 4TB drive:
      parity: WDC_WD40EZRX-00SPEB0_WD-WCC4EF2YL0NZ (sdb), 3907018532 blocks, 4 TB
      One reason I wouldn't suggest just moving to a 5/6TB drive is that you may (down the line) run into the same thing once you start adding 5/6TB data drives, unless you fix the underlying cause.
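For the curious, that figure is (as I understand the unRAID display) the partition size in 1 KiB blocks, so you can check that it adds up to 4 TB decimal yourself:

```shell
# 3907018532 blocks of 1024 bytes each
echo $((3907018532 * 1024))   # prints 4000786976768, i.e. ~4.0 TB decimal
```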
  21. Well. Uh. I don't know what to say, but Norco is wrong. They even sell the correct cable for their case:
      http://www.norcotek.com/item_detail.php?categoryid=6&modelno=c-sff8087-4s
      http://www.newegg.com/Product/Product.aspx?Item=N82E16816133033
      Note the part where it says: Read here: http://lime-technology.com/forum/index.php?topic=7003.0
  22. Zugman is correct, the cable you listed at the very top (the "fanout" cable) is a forward cable for connecting SAS -> SATA. You need a reverse cable for connecting SATA -> SAS, specifically one like the "NORCO C-SFF8087-4S Discrete to SFF-8087" that you listed.
  23. This is actually really easy. Set up a share for DVDs and backups that includes both disks. Then write the DVDs and backups directly to the disk shares. They will show up properly (combined) in the user share but you will have full control over where each DVD and backup ends up.
  24. I'll throw in my two cents here, too. Your drives are definitely running way too hot. As I recall mine don't get over 40 when running a parity check either. If yours are going north of 70 you have something off. You say your rear fans are blowing out of the case, are the 120s in the middle blowing to the rear? Do you have your server in a closet or something? I have basically the same setup and I would panic if I saw a drive top 50, much less 70.