statecowboy

Members · 112 posts
Everything posted by statecowboy

  1. I just disabled my network adapter like an idiot (I was assigning static mappings to all my devices and wanted to turn the adapter off and back on so it would grab the static-mapped IP). Right after I clicked disable I realized it was a goof. Anyway, is anyone aware of a clever way to re-enable the adapter so I can communicate with the VM again? I'd rather not scrap it and build another if possible, but it's not the end of the world I suppose. It's a Windows 10 VM, for what it's worth.
  2. Hi all, I was curious what others do as part of normal maintenance/cleaning activities for their servers. When I say cleaning, I mean physical cleaning. I keep mine in my basement and have purchased a compressed air blower (in lieu of cans). I just want to make sure I'm doing what I need to do to keep things running for as long as possible. I've got the norco 4224 chassis which tends to cake some dust on the drive bays.
  3. I tried deleting the old controller docker and reinstalling and I can't get to the web UI at all even with a fresh install. EDIT - Not sure how but my initial install must have just been messed up. Removing the docker, deleting the folder, and reinstalling did the trick...
  4. Hi there. I just installed a new AP Pro and went to the controller to add it, but the UniFi website says my controller is offline and needs to be updated. I also can't seem to access my controller through the local site. Any ideas?
  5. Turns out one of my SAS cables unplugged itself from the expander. Didn't even think that would be an option. Thanks again. Am now reconstructing the drive to be safe.
  6. OK, thanks. I'll check the cabling out. How would I go about properly bringing the drive back online?
  7. Bought a few of the 8TB Easystore drives last year. The array has been running with no problems whatsoever since the beginning. I just got a notification that one drive was taken offline. SMART report attached. Time for a new drive? Not sure why it was taken offline and whether it's safe to bring it back on. The array is running in parity mode now. WDC_WD80EFAX-68LHPN0_7SGENLWC-20180724-1703.txt
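For anyone looking at a dropped drive like this, the handful of SMART attributes below are usually what tells the story. A minimal sketch using smartctl from smartmontools; /dev/sdX is a placeholder, not my actual device:

```shell
#!/bin/sh
# Sketch: pull the SMART attributes that usually explain a dropped
# drive. /dev/sdX is a placeholder device name.
check_disk() {
    smartctl -a "$1" | grep -E \
        'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error'
}

# guard so the sketch is a no-op without smartmontools or the device
if command -v smartctl >/dev/null 2>&1 && [ -b /dev/sdX ]; then
    check_disk /dev/sdX
fi
```

A climbing UDMA_CRC_Error_Count generally points at cabling rather than the disk itself, which matches how this one turned out.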
  8. I used it for a bit, and it is unlimited, but the upload speed was excruciatingly slow.
  9. I was going to try and answer your question about pointing your Plex server to cloud-based files, but deleted it because I'm not sure what the current answer is (obviously there is Plex Cloud, but I don't think that's what you're after). That said, I've heard that Google runs hash checks against known pirated copies of material out there, so if your media is all legit copies of content you own, you should be set to at least run Plex Cloud. If it's not, I don't think it would be a good idea. I honestly don't see a lot of value in running Plex Cloud in the first place. I suppose it does allow you to rely on cloud providers' servers to power your Plex experience, but at the end of the day it's far less powerful than running your own, and (tinfoil hat going on) it gives cloud providers a pretty in-depth look at your content and habits. I'm solely using my cloud backup as just that, a backup.
  10. I use Google G Suite. Yes, it's supposed to be limited to 1TB per user until you have 5 users (at which point you get unlimited), but I rolled the dice and am getting unlimited at the moment. I've got 14+TB stored. I use rclone to run a daily script and sync the folders I want (and encrypt my media to avoid prying Google eyes). G Suite is $10/month per user (I only have one user set up).
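The daily script boils down to one rclone sync call. A minimal sketch, assuming a crypt remote named "gcrypt" layered over the Google Drive remote so media is encrypted client-side before upload; the share path and log location are placeholders from my setup:

```shell
#!/bin/sh
# Sketch of a daily backup script. "gcrypt" is an assumed crypt remote
# wrapping the Google Drive remote; adjust SRC/DEST/LOG to your setup.
SRC="/mnt/user/media"
DEST="gcrypt:media"
LOG="/var/log/rclone-sync.log"

run_sync() {
    # sync makes $2 an exact replica of $1 (remote-only files are deleted)
    rclone sync "$1" "$2" --log-file "$LOG" --log-level INFO
}

# guard so the sketch is a no-op on machines without rclone
if command -v rclone >/dev/null 2>&1; then
    run_sync "$SRC" "$DEST"
fi
```

Drop it in a daily cron entry (or unRAID's User Scripts plugin) and the remote stays a mirror of the share.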
  11. Hi, I'm asking out of curiosity, not trying to argue or anything, but why do you say this? I have an H110i I use on one of my machines and it's worked just fine. Rig looks slick BTW.
  12. Hi, I'm curious what issue you're having. I've had no issue with duplicate files. I do have a friend who had some issues with media files, but I assume it was caused by dockers handing files off between themselves improperly. He used a docker called dedupe I think to take care of that issue.
  13. No judgement here.....this was my first "server" build so I had to learn all of that as well over the last few months.

With the Norco 4224 chassis you have 6 backplanes. Each backplane has a SAS connector and a Molex power connector (it appears there are different Norco configurations out in the wild, some with multiple Molex connections etc. - however, mine has one SAS port and one Molex port on each backplane).

From that SAS connector you can do one of a couple of things. One would be to run a reverse breakout cable from the SAS connector to multiple SATA ports on your mobo. Assuming your mobo doesn't have a ton of spare SATA ports, you are looking at doing what @saarg recommended, which is getting a SAS controller card and a SAS expander. This is the route I went. I have an LSI 9210-8i SAS controller flashed to IT mode (basically this gets rid of the RAID functionality of the card, so your system just sees drives and you decide what to do with them), which is connected with 2 SAS-to-SAS cables to an HP 24-drive SAS expander card. From the expander card I run 6 SAS-to-SAS cables to the backplanes. For what it's worth, I bought the SAS controller and expander for about $60 for both (used on eBay).

As far as "file server" goes, unRAID has built-in share functionality, so you can just set up SMB shares for whatever folders you decide you want to share over your LAN. There are also various dockers that give you additional functionality, like Nextcloud, which allows you to log in to your server and share files similar to Dropbox.
  14. What do you mean by file server (LAN shares, cloud shares, etc.)? You can create shares for whatever you want, and not need to pass those shares to your file server (or just turn on smb shares to share whichever shares you want over your LAN). My personal setup is I have a share for movies and tv which are passed to plex. I also run nextcloud as a file server of sorts which allows me to add, share, download, upload etc files to/from my machine (I have a dedicated nextcloud share, but also pass my other shares to it so I can use them). You can pass whatever shares you want to the file server. Depending on what file server system you're talking about, I don't see any value in moving all of your media into one big folder. As for a chassis, I have a Norco 4224, purchased from Newegg. It has 24 hot swappable bays, plenty of room for future expansion. Quality control and support are basically non-existent with Norco, but my chassis works perfectly.
  15. Hi guys, I have an SC2600CP board with two NICs. Having a board with two NICs is fairly new to me, so I'm looking for some help here. For some reason one of the NICs always registers as 100 Mbps, while the other is 1000 Mbps. Usually this does not cause any issues. However, on occasion, and currently, unraid bonds to the 100 Mbps connection. Is there a way I can force it to use the 1000 Mbps connection? Not sure if this is a BIOS thing either. I can access the motherboard's integrated web BMC console when it's plugged in, which is pretty helpful. I believe both NICs are 1 Gbps anyway, so I'm not sure why one gets assigned 100 Mbps. EDIT - Please disregard... it appears the cable I was using was bad. Both NICs are registering as 1000 Mbps now.
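For anyone chasing the same symptom: the negotiated speed of each NIC can be read straight out of sysfs on any Linux box, which makes the bad-cable case easy to spot. A sketch with the interface names left generic:

```shell
#!/bin/sh
# Print the negotiated speed of each interface from sysfs. A link that
# negotiates 100 instead of 1000 usually means a bad cable or port.
link_speed() {  # $1 = sysfs net directory, $2 = interface name
    f="$1/$2/speed"
    [ -r "$f" ] && echo "$2: $(cat "$f" 2>/dev/null) Mb/s"
}

for nic in /sys/class/net/*; do
    link_speed /sys/class/net "$(basename "$nic")" || true
done
```

The speed attribute reads -1 (or errors out) on interfaces without a live link, so only connected NICs report a useful number.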
  16. I ended up solving my issue with some help on the rclone forums. Apparently you must create the directory you want to secure after encrypting the remote. In my case, since I had google:secure, I needed to create the secure directory. This was done with the "rclone mkdir google:secure" command. All is well now.
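For anyone hitting the same thing, the fix is just creating the directory on the plain remote before the crypt remote can use it. The remote and directory names below are from my config; substitute your own:

```shell
#!/bin/sh
# The crypt remote wraps google:secure, but that directory must exist
# on the plain remote first. Remote/dir names are from my setup.
PLAIN_REMOTE="google:"
CRYPT_DIR="google:secure"

# guard so the sketch is a no-op on machines without rclone
if command -v rclone >/dev/null 2>&1; then
    rclone mkdir "$CRYPT_DIR"     # create the directory being wrapped
    rclone lsd "$PLAIN_REMOTE"    # verify "secure" now shows up
fi
```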
  17. Hi guys, I've managed to get things set up just fine with Google Drive, but I'm having an issue where rclone does not seem to create my encrypted directory, in my case named google:secure (google is my Google Drive remote). I don't get any errors, but when I run "rclone lsd google:" from the command line I see all my folders except for the encrypted folder. I've done a bit of searching and I don't see anywhere that states Google Drive does not allow encrypted folder creation. Any ideas?
  18. Duplicati is great, but I prefer for my backup to be an actual replica of my data on the cloud rather than a backup comprised of a number of compressed files. With rclone simply cloning the contents of your array, Google Drive's sharing capabilities make sharing family photos, videos, etc. very easy. Edit - I think perhaps setting Duplicati to "no encryption" would do what I've described above? I may give that a shot.
  19. That is correct with rclone and google drive as well, you're right. It's just something I need to remind myself to be mindful of. You can use the copy command as well, which avoids this.
  20. I thought I would share my experience trying to set up a cloud backup solution for my array. It has not been as easy as I had hoped. I initially had planned on using Crashplan for Small Business; I'm now with G Suite (Google Drive). With both of these options there are some issues people need to be aware of.

Crashplan Pro (AKA Crashplan for Small Business) - Cost is $10/month for unlimited backup capacity. Setup is relatively straightforward using the Crashplan Pro docker (I used @Djoss's docker and it worked great). However, I was experiencing very slow uploads, and after discussing with their support they informed me that users should expect 1-5 Mbps upload speed. This was unacceptable to me, as I have a lot of data and the initial upload would have taken 90 days according to the Crashplan app.

G Suite (Google Drive) - Signing up for a G Suite business account requires a domain. I already had one so that was not an issue. Getting everything set up is relatively straightforward (you must confirm you own the domain by adding records through your registrar). G Suite also gives you access to business email with your domain ([email protected]), which is a nice perk. Storage is supposedly limited to 1TB per user until you have 5 users, at which point it becomes unlimited (in other words, to get unlimited you would in theory need 5 x $10 = $50/month worth of services). However, as a lot of folks know, this has not been enforced, and people with just one account have been getting unlimited storage. I found this to be the case in my situation as well; my storage limit shows as Unlimited. I am using rclone to do the actual syncing and it's been working beautifully. I used Spaceinvader's ( @gridrunner) video for the setup (love his videos). All that said - I was getting very fast upload speeds (basically saturating my upload bandwidth on a gigabit connection).

When I woke up this morning, however, I had received an error from rclone telling me "Error 403 - User Rate Limit Exceeded". I contacted Google support and they informed me that they enforce a 750GB-per-day limit on uploads. So, it looks like I will need to run my sync scripts a few times during the first couple of weeks to get everything properly uploaded, as Google resets my upload limit each day. Of note, they also told me that during the trial period they limit you to 750GB total upload, so if you are still in trial, your upload limit may not reset. In my case, I asked to forego the trial and pay now. I'm sure this information is probably common knowledge, but I figured I would share in case it helps someone trying to figure out a good cloud backup solution. These things seem to evolve quickly, so this is my experience as of this date.
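Given the 750GB/day cap, rclone's --max-transfer flag can make each scheduled run stop cleanly at the quota instead of erroring out with 403s. A sketch, with the share path and remote name as assumptions:

```shell
#!/bin/sh
# Sketch: cap each run near Google's 750GB/day upload quota so a cron
# job can chip away at the initial upload without tripping 403 errors.
# SRC and DEST are placeholders for my share and crypt remote.
SRC="/mnt/user/media"
DEST="gcrypt:media"

quota_sync() {
    # --max-transfer stops the run once roughly 750GB has been sent
    rclone sync "$1" "$2" --max-transfer 750G --log-level INFO
}

# guard so the sketch is a no-op on machines without rclone
if command -v rclone >/dev/null 2>&1; then
    quota_sync "$SRC" "$DEST"
fi
```

Run it once a day and it picks up where the previous run left off, since sync only transfers what is missing or changed.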
  21. Great, thank you @Djoss. One other quick question: I have about 9TB of data. At first it backed up the first 500GB or so in a day. Now it's showing an anticipated completion of 78 days. I've tried optimizing my settings according to Crashplan's recommendations to allow it to use more bandwidth, but it doesn't seem to make a difference. Is this common? EDIT - I got in touch with Crashplan support and they stated: "With that being said, we are a shared service, so we are not able to take advantage of your network's full bandwidth. Our upload and download speeds are also influenced by our encryption and de-duplication procedures. We expect our customers to experience speeds in the 1 to 5 Mbps range." So, it looks like it is actually going to take that long. Yeesh.
  22. Thanks! That actually did the trick. I can now see storage under the manage tab. Curious though, it looks like I am backing up other paths as shown in the attached. I only need storage, correct? (only really care about the array - at some point in the future I may add my dockers (on an unassigned device)).
  23. Hi guys. I have another error in my memory which I've added below to explain my question above (I've also attached my last diagnostics).

From unRAID logs:

Feb 20 11:30:45 someflix-unraid kernel: EDAC sbridge MC0: HANDLING MCE MEMORY ERROR
Feb 20 11:30:45 someflix-unraid kernel: EDAC sbridge MC0: CPU 0: Machine Check Event: 0 Bank 10: 8c000047000800c1
Feb 20 11:30:45 someflix-unraid kernel: EDAC sbridge MC0: TSC 160afab75fbec
Feb 20 11:30:45 someflix-unraid kernel: EDAC sbridge MC0: ADDR 142592000
Feb 20 11:30:45 someflix-unraid kernel: EDAC sbridge MC0: MISC 908400800080e8c
Feb 20 11:30:45 someflix-unraid kernel: EDAC sbridge MC0: PROCESSOR 0:306e4 TIME 1519147845 SOCKET 0 APIC 0
Feb 20 11:30:45 someflix-unraid kernel: EDAC MC0: 1 CE memory scrubbing error on CPU_SrcID#0_Ha#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0x142592 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0008:00c1 socket:0 ha:0 channel_mask:1 rank:1)

From BMC Web Console:

Event ID: 22 | Time Stamp: 02/20/2018 17:31:40 | Sensor Name: Mmry ECC Sensor | Sensor Type: Memory | Description: Correctable ECC. CPU: 1, DIMM: B1. - Asserted

someflix-unraid-diagnostics-20180221-1837.zip
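To see whether correctable errors like this are a one-off or recurring on the same DIMM, the EDAC lines can be tallied per slot straight from the kernel log. A sketch based on the message format above; the syslog path is an assumption, point it at wherever your kernel log lands:

```shell
#!/bin/sh
# Tally correctable (CE) EDAC events per DIMM slot from a kernel log,
# matching the "CE memory scrubbing error on ..." format shown above.
count_ce() {
    grep -o 'CE memory scrubbing error on [A-Za-z0-9_#]*' "$1" \
        | sort | uniq -c | sort -rn
}

# the log path is an assumption; skip quietly if it isn't there
if [ -r /var/log/syslog ]; then
    count_ce /var/log/syslog
fi
```

The occasional single CE event is what ECC RAM exists to absorb; a count that keeps climbing on one DIMM is the signal to reseat or replace that module.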