Phastor

Everything posted by Phastor

  1. I have a top level share called "backups" that I keep copies of my game server saves and the Veeam archives for my workstations in. That share is allocated to cache only, so based on what I heard here, I decided to put a tmp folder under that and map /tmp to it. Next question: I did a test run and the first thing I noticed was the tons upon tons of 50 MB zip files going onto my external backup drive. I did some research and learned about remote volumes. From what I understand, the default size of these volumes was set to 50 MB with the idea that smaller sizes would be easier to move up and down to cloud services. Given that I'll eventually fill my 8 TB and am backing it all up, that puts it in the ballpark of around 160,000 of these little guys sitting on that drive. I don't intend to push my backups to the cloud, and instead plan to stick with a couple of USB drives that I rotate so that I can have one off-site. With this in mind, should I set the remote volume size to something larger (see the sketch after this list)? The majority of what I'm backing up is audio and video, if that's a factor. And, same as a couple other people have stated here, I'm here now because of Crashplan deciding to crap on their home users.
  2. Where's a good place to map /tmp? Does it get large? Would I be able to map it to something like /user/appdata/duplicati/tmp without an issue?
  3. Thanks again! I'll look through that. Whenever I have an outage, I usually wait a good long while after power has come back up before I bring the server back online for that reason. I actually forgot about this thread and my UPS research until we had a power outage last night. Currently having to suffer through a parity check reminded me really quickly, so hopefully now I'll get into gear and get that thing ordered sooner rather than later.
  4. Thanks for the info! I looked up the BX1000G, but it doesn't appear to be available on Amazon. I'm looking at the BR700G right now. It doesn't have as much capacity, but it's more in my price range. Still looking into whether or not it will display all the info.
  5. Gotcha aivas-diagnostics-20170808-1724.zip
  6. I understand and expect about half write speeds with a parity protected array, and I was really happy with my results when I first built my server. I was getting 50-60 MB/s, which was pretty much right on the mark of what people said to expect. However, I'm running into some weirdness over the last week or so. My writes seem to jump all over the place, anywhere between 5 and 50 MB/s--usually hanging out more around the 5-10 mark. I can't find any rhyme or reason to it. I think I've ruled out the network since I get the same results when copying from a Windows 10 VM that's sitting on cache in the same server. My read speed also seems OK, riding between 90 and 100 MB/s. I'm not sure what I've done differently. I know this isn't much to go on, but it's all I've got at the moment. I guess more than anything I'm looking for suggestions on how to go about troubleshooting this (see the disk test sketch after this list). Any thoughts?
  7. I'll take that as a no. Guess I'll be paying up to Crashplan.
  8. I started using Crashplan because, as far as I know, it's the only solution that doesn't involve rsync or the terminal in one form or another. While I would like to use the terminal more often and become more familiar with it, I would prefer my backup solution to be easy, something I can set and forget and not say "How the hell did I do this?" when I come back to make changes to it months later. I was under the impression that the only thing you needed a Crashplan subscription for was their cloud service. I was OK with this--I can't use cloud backup anyway because my ISP has graced me with a 400 GB monthly data cap. The only thing the Crashplan software warned would go away after the trial expired was the ability to back up to Crashplan Central. However, my trial expired the other day and I lost a lot more than the cloud service. Advanced settings have been disabled, so I cannot control things like deduplication or compression. I can't even view them to see if they went back to their defaults or stayed at what I set them to. I had compression turned off since it slows backups way down and most of my data can't be compressed much further anyway. Backup sets also went away, which is critical since I had two sets: one backing up the /user folder to one USB drive, and another backing up the direct disk paths to a different USB drive. I did this so I had more restoration options. If I needed to restore just a file here or there, it could easily be done from the /user backup. If I lost a whole drive and parity failed, I could use the /disk backups to restore that whole drive, knowing I got back everything I lost without having to know exactly what was on that drive, or having to restore the full array just to get that data back when all the other drives were fine. Now I can no longer do this, because I cannot add sets or even modify any besides the first one. I really don't want to pay a $60/yr subscription for just a few features (even if they are critical) when the value of that subscription is all in the cloud backup portion, which I can't even use. Are there any other easy-to-use backup options for unRAID that even come close to Crashplan? Something comparable, but it doesn't necessarily need cloud options since that's not something I can do anyway. Side question: Even if I can't see or control it now, assuming deduplication is left on by default in Crashplan, would that mean I can add the /user paths and the individual disk paths to the same backup set? Would deduplication prevent the redundant data from being written to the backup multiple times? Could something go wrong with doing this? Assuming I can do this with no negative consequences, I might be able to stick with Crashplan, since that would only require a single backup set, which I could still rotate across multiple backup destinations. It seems like I can still have multiple target paths for my backups--just not multiple sets. I would still be stuck with compression forced on, but I could learn to live with that.
  9. I remember reading something a few weeks ago about UPSs. Someone had a UPS that worked with unRAID's APC daemon and would shut down the server properly during an outage, but it would not display the UPS load or runtime or anything while running on AC power. It was explained that the UPS this person had lacked a certain feature that was required for this information to be displayed. Now that I am actively shopping for a UPS, I want to keep a watch out for that feature, but I don't remember what it's called and now I can't for the life of me find that discussion again. Does anyone know what I'm talking about? Furthermore, can anyone confirm some UPS models that will display all the info within unRAID? (There's a quick way to check what a given unit actually reports; see the sketch after this list.)
  10. For some reason, Crashplan isn't backing up on the schedule I've given it. I'm aware that there isn't a way to tell it to run a backup "at this exact time," but I wanted it to run a backup at around midnight. I've set the "Backup will run" setting between 12:00 AM and 3:30 AM. I've noticed so far it has not backed up during this window at all. It's backed up one time outside of that window and now there hasn't been a backup at all within the last 24 hours. Under the summary it does say that backup is scheduled to begin at 12:00 AM. It also says that a backup was done in the last 11 hours, but I find that hard to believe since I don't see any files in the backup that I've uploaded to my shares before that time. I've included a shot of my schedule and frequency/retention policy. Am I missing something?
  11. Yup! This is what I was thinking. I've got appdata and my flash already backed up on a nightly schedule. I'll remove the docker image from backup then. As far as VMs, I don't have any yet. I'll probably spin up an ARK and Minecraft server sometime down the road. I might not map their data to an unRAID share directly, but I do plan for them to back up their world files to a share on a regular basis, which will then get picked up by Crashplan. I wouldn't back up the VMs themselves then, since it shouldn't be too difficult to spin up new ones and restore their data.
  12. Straight and to the point. Thanks! Can the same be said for the libvirt image and pretty much the system share as a whole?
  13. Since it's always changing, it gets flagged for backup every night by Crashplan. At this rate, at 20 GB a night, I'm going to have over 600GB in versions of the same file in a month's time. It also extends my backup time by some margin. Does this even need to be backed up? We all already know that settings, configs, and other data for dockers are in appdata, and from what I can tell, templates for previously installed dockers are stored on flash. I'm already backing up both of these locations, so really if I were to lose my cache drive, I could theoretically rebuild all of my dockers from those templates and appdata really easily. That being said, is there any reason at all why I should be backing up the image?
  14. I pre-cleared my two 4TB Reds the other day simultaneously, so I'll verify this.
  15. Got all my data transferred over to the new server and am running my first backup. It's targeting a USB 3 drive. Right now it's reporting around 100 Mbps, so roughly 10-12 MB/s. That seems to be a bit on the slow side. I'm using a PCIe 1x USB 3 card since the motherboard I use doesn't have it. While the PCIe 1x bottlenecks USB 3 throughput, it should still be plenty to max out a spinner. My guess is that unRAID either doesn't support 3.0, or it's just not recognizing the drive as such (see the sketch after this list). I know that Crashplan does compression, and because of that I wasn't expecting 100 MB/s speeds, though I wouldn't expect it to drop transfer rates this low. Any ideas?
  16. Just some more observations while playing around with this. I started a clean backup of /mnt/user and noted the size of the backup. I then added /mnt/disk1 and /mnt/disk2 and ran the backup again. The backup was almost instant and the size didn't go up at all, but the files in the backup were present under both /mnt/user and the individual disks. I'm assuming that's deduplication at work. Knowing this, I could theoretically back up /mnt/user and the individual disks at the same time without the backup being any larger. This would give me the option to do a full restore of /mnt/user or restore only the contents of a failed disk, whichever the situation called for. However, this just seems like a lot of hassle to make sure that each disk has the right folders selected. There's also the issue of moving files from one disk to the other with unBalance or MC: Crashplan will mark the files as deleted on the original disk even though they were just moved. I'm afraid this would cause those files to be deleted from all locations within the backup when it does the cleanup pass that removes deleted files. I think that's what was happening to me before when I said files weren't where they were supposed to be after moving them. I think I'm still going to stick with just backing up /mnt/user. It seems to be the safest route, despite the issue of having to restore everything if I lose just one disk.
  17. Apparently it's been a requested feature. Hopefully they add it sometime. It's a basic function that I don't think I've ever seen missing from a backup utility before.
  18. Yeah I think I'll be doing the same. I've been testing backup and restores on the individual disks, moving files around with unBalance and whatnot and then backing up again to see how it behaves. Files that should be in certain restore points on certain disks aren't there and all kinds of weird things are happening. Too scary. I just wish Crashplan would let me do a blanket restore over everything while only restoring files that are missing. It just seems like a ton of unnecessary wear and tear on the drives and a lot more time consuming to be forced to do all those writes for files that are already there.
  19. I've been getting some bits of info from /r/unraid and thought I would run some thoughts by here. I'm still waiting on some more items to come in before I can do my build, but I'm seeing that as a good thing, as I'm still testing things out with the trial and learning what I can. When I get my build done, I'm going to be running a 3 disk array, 2 data and 1 parity, and plan on backing that up onto an external USB drive. A second drive will be added later so I can rotate them and have an off-site backup. I will likely be doing the backups with the Crashplan docker. Initially, I was planning to back up the shares in /mnt/user so I could get everything across all drives, with the exception of a "backup" share that my workstations back up to. However, I'm thinking of the event of a drive failure where parity fails as well. Theoretically, that drive will be backed up, but I won't know what data to restore since I won't know exactly what I lost on that drive. I thought I could use Crashplan to do a blanket restore of everything and tell it not to restore any files that already exist. This way I would essentially only restore what disappeared when the drive died. However, the only restore options Crashplan has for existing files are "Overwrite" and "Rename". There doesn't seem to be an option to tell it flat out not to restore anything that already exists, and I don't want it doing the unnecessary work of overwriting files that are already there. It was brought up on /r/unraid that I could have Crashplan watch and back up the actual individual disks instead of the shares (/mnt/disk1, /mnt/disk2, etc. instead of /mnt/user). That way if a disk dies, I can just restore that whole disk and know that I got back everything that disappeared with it, without it trying to restore stuff that's on the good disks. Further, in order to exclude the "backup" share that my workstations back up to, I would just have to tell Crashplan to exclude the folders tied to that share on each individual disk. Is this a safe and sound way to do things? They seem to think so over at /r/unraid--I just want to pull opinions on it from multiple sources.
  20. That is correct. The good physical disk is part of the array. I understand that the write speed is about what is expected. However, what I was surprised about is that it wasn't less. Since I was getting such poor read speed on the bad disk and the parity emulated image of that disk during the copy of it to my live server, I figured MC would have been bottlenecked by that read speed as well while trying to move it to the good disk. It's that inconsistency that confuses me. Also, when I say "live server", I mean my current Win2k8 file server VM that resides on a different host. I don't think I clarified that in my previous comments. Sorry if that caused confusion.
  21. It looks like I'm getting a crash course on disk failure in unRAID this morning. Over the last two days, I've been letting the Handbrake docker encode a movie (the test rig is running on an old dual core Athlon II X2, which is why it took so long). Mostly just fooling around and testing it. It finished last night apparently, but when I got up this morning and checked in on it, I had a lot of reallocated and offline uncorrectable errors on the disk that the movie was sitting on (OK, so maybe the encode time wasn't entirely the CPU's fault). I knew this disk already had a few errors when I started, but since this was just for testing I used it. I started to copy the newly encoded video to my live server, but wow! Less than 500 KB/s transfer, which eventually just failed. Guess this disk is way worse than I thought. I yanked that disk out (keep in mind this is me just fooling around and I would never be this ballsy on my live server) thinking maybe I would get better read speeds if I wasn't trying to read off a bad disk, but instead off a parity emulated disk. This also gave me a chance to see what the performance was like on a degraded array. However, trying to move the movie over again, I got pretty much the same result. Here's what I don't understand though. I used MC to move the movie from the emulated disk to the remaining good data disk and got about 30 MB/s moving it that way. I kept the directory structure and ran the permission script to make sure I wouldn't have issues with the share afterwards. I then started to move the movie over to the live server again. This time it started out at about 2 MB/s, but over the span of about 5 minutes it slowly crept up to about 10 MB/s, which is what I would expect given the test rig is currently limited to 100 Mbps network bandwidth. The movie did successfully copy this time. I understand the poor read performance on a failing drive, but what I don't understand is the poor read speeds from the emulated drive. I understand that the array is degraded once I removed the faulty disk and that performance isn't going to be great, but I didn't expect it to be that bad. What I further don't understand is the inconsistency in read speed between copying to my live server and moving from the emulated drive to the good physical drive with MC. I'm not blaming unRAID for any of this. Faulty hardware is faulty hardware. I'm just trying to wrap my head around the numbers that I'm seeing. Can anyone clue me in? Edit: To clarify, when I say "live server" I mean my Win2k8 file server VM on a different host.
  22. I'll definitely check out that Community Applications plugin. Thanks!
  23. Looking further into this, I think I'm going to want to add a cache drive as well. My test rig is currently connected via a Cat5 (not 5e) cable and is limited to 100 Mbps because of that, so I haven't gotten an accurate impression of write speed yet. I also read up some on turbo write. Since mine is going to be mostly a media server and won't get written to very often, I don't think having all drives spin up during writes will be that big of a deal for me. I've read the pros and cons of both, but can I hear what some people have chosen for themselves and why they chose one over the other for their use case?
  24. Thanks for the info! What do you use to push the incrementals to the Windows box? On the topic of adding drives, I just realized this motherboard only has five SATA ports. Factoring in the optical drive, that leaves me with one port available for future expansion unless I bring back the HBA or get a new motherboard. I'm positive it's going to be a long time before I'll need to expand again, but I'm just thinking of the distant future. If I do move to another board, assuming I use the same flash drive with the existing config intact, will unRAID be able to identify the drives in the new system and correctly add them back into the array?
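
On the /tmp mapping and remote volume size in items 1 and 2: a minimal sketch of what that could look like. The linuxserver/duplicati image, the host paths, and the 500 MB figure are assumptions for illustration, and the CLI option name is from Duplicati 2 ("Remote volume size" in the web UI), so double-check it against your version.

    # Hypothetical docker run for a Duplicati container; adjust image and paths.
    docker run -d --name=duplicati \
      -p 8200:8200 \
      -v /mnt/user/appdata/duplicati:/config \
      -v /mnt/user:/source:ro \
      -v /mnt/user/backups/tmp:/tmp \
      linuxserver/duplicati

    # Larger remote volumes mean fewer files on the target drive:
    # 8 TB / 50 MB is roughly 160,000 volumes; at 500 MB it drops to roughly 16,000.
    # The CLI equivalent of "Remote volume size" is --dblock-size.
    duplicati-cli backup file:///backups/target /source --dblock-size=500MB

The small 50 MB default mainly helps when pushing to cloud storage over a slow or capped link; for a local USB target a larger volume size just means fewer, bigger zip files.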
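
For the write-speed weirdness in item 6, one way to separate the disks from everything else is to write a test file straight to a disk path from the unRAID console, bypassing SMB, the VM, and the network entirely. The paths below are just examples; delete the test files afterwards.

    # Sequential 1 GB write straight to a data disk (parity is still updated).
    dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=1024 oflag=direct
    rm /mnt/disk1/ddtest.bin

    # Same test against the cache drive for comparison.
    dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=1024 oflag=direct
    rm /mnt/cache/ddtest.bin

If the dd numbers bounce around the same way, the problem is on the disk or controller side rather than the network or SMB.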
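
For the UPS question in item 9: whatever the missing feature was called, a quick way to see what a given unit actually reports is to plug it in over USB and dump the apcupsd status from the console. If load and runtime aren't in that output, the web UI can't show them either.

    # Everything apcupsd can read from the connected UPS.
    apcaccess status

    # The kind of fields the load/runtime display relies on.
    apcaccess status | grep -E 'LOADPCT|TIMELEFT|BCHARGE'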
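
On the USB 3 question in item 15, two quick console checks can narrow it down (sdX below is a placeholder for the actual device):

    # The negotiated speed shows up in the tree: 5000M means USB 3.0,
    # 480M means the drive fell back to USB 2.0.
    lsusb -t

    # Raw sequential read from the drive itself, taking Crashplan's
    # compression and the filesystem out of the picture.
    hdparm -t /dev/sdX

If lsusb shows 480M, or hdparm is also stuck around 10-12 MB/s, the bottleneck is the USB link or the enclosure rather than Crashplan.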