Skraut

Members · 23 posts

  1. Yeah, it isn't fast. I kept restarting one of my local backups trying to figure out a way to make it faster. I tried SSH and WebDAV, and the next step was to back up straight to a disk mount, but we'll see. It doesn't help that the first machine I'm backing up is an ancient MacBook that I'm afraid will die any day, and Mono + Duplicati want all its resources. I wasn't sure if the issue was the network, or just that system not being able to encrypt/compress the files fast enough.
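    A quick way to tell those two apart is to time each stage separately. A rough sketch (GNU dd syntax; "backuphost" is a placeholder, and the gzip + openssl pipeline only approximates the compression and encryption work Duplicati does):

    ```
    # Raw network throughput to the backup host over SSH:
    dd if=/dev/zero bs=1M count=500 | ssh backuphost 'cat > /dev/null'

    # Local compress+encrypt speed, with incompressible input so gzip
    # actually has to work:
    dd if=/dev/urandom bs=1M count=500 | gzip | openssl enc -aes-256-cbc -pass pass:test -out /dev/null
    ```

    If the second pipeline reports far lower throughput than the first, the machine is the bottleneck, not the network.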
  2. This seems like a nice tiny base container with what you may be looking for: https://github.com/panubo/docker-sshd It's not using the same layers that some other unRAID-specific containers use, but it's Alpine-based, so it is tiny. I don't run this one, but I do run some other Alpine-based images on my server with no issues. I faced the same issue as you, thought about going down this route, and ended up just installing NextCloud instead. That gave me the ability to host a Dropbox-like service for myself and family. I then had each user create a folder in their NextCloud instance called Backup which they don't sync back to their machine, and I use Duplicati clients with the WebDAV protocol to send the data to that Backup directory. I don't know the performance impact of WebDAV vs. sshd (my guess is the simplicity of sshd may make it faster), but the simplified web interface for administering users has been nice, and my parents have already been taking advantage of some of the Dropbox-like features.
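    If you do go the sshd-container route, the run command would look something like this. This is only a sketch from my reading of that image's README, so treat the environment variable, port, and key path as assumptions and double-check them against the repo:

    ```
    docker run -d --name sshd \
        -p 2222:22 \
        -e SSH_USERS="backupuser:1000:1000" \
        -v /mnt/user/backups:/data/backups \
        -v /boot/keys/backupuser.pub:/etc/authorized_keys/backupuser:ro \
        panubo/sshd
    ```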
  3. So, here's where I'm at so far. I set up NextCloud, and the family has started backing up to it using Duplicati clients over NextCloud's WebDAV protocol. I could probably have them SFTP or FTP the backups to me, but I didn't want to deal with setting up those accounts on my server, and they'll probably use some of the other features of NextCloud now that it is set up. Now I just need to figure out what to do next: either use Duplicati on my unRAID box to send all that data to Backblaze B2, or just build another unRAID box, have it hosted at another family member's house, and distribute the offsite copies across several places.
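    For reference, the job the Duplicati GUI builds can also be expressed on the command line, which makes the recipe easier to share with family. A sketch with a placeholder host, user, and passwords; Duplicati takes WebDAV credentials as URL query parameters, but verify the exact option names against duplicati-cli's help:

    ```
    duplicati-cli backup \
        "webdav://nextcloud.example.com/remote.php/webdav/Backup?auth-username=alice&auth-password=app-password" \
        /home/alice/Documents \
        --passphrase="long-encryption-passphrase"
    ```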
  4. There's a pretty good image out there already for Duplicati, which I installed the other night: https://hub.docker.com/r/linuxserver/duplicati/ I've played with it a bit, but haven't fully figured out my backup strategy or where to put things offsite. As a (former) CrashPlan Home customer, I set up NextCloud on my unRAID box and have started testing Duplicati on my home computers, and those of my family, to send the backups to the unRAID server using NextCloud's WebDAV protocol. (I could probably just use SFTP, but I didn't want to mess with setting up all those accounts, and the other features of NextCloud will be useful as well.) Once I have collected backups from my home machines, I'm looking to use Duplicati on the server to send that data off to Backblaze B2. Or maybe I'll just build another unRAID box for my parents, and we'll host offsite backups for each other.
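    For anyone else grabbing that image, the template boils down to roughly this run command. The paths are examples from an unRAID-style layout; the image's page documents the /config, /backups, and /source mounts and the web UI on port 8200:

    ```
    docker run -d --name=duplicati \
        -e PUID=99 -e PGID=100 \
        -p 8200:8200 \
        -v /mnt/user/appdata/duplicati:/config \
        -v /mnt/user/Backups:/backups \
        -v /mnt/user:/source:ro \
        linuxserver/duplicati
    ```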
  5. It could probably work, but I think of those as more like Dropbox, only keeping the most recent version of each file. I know my parents have restored older versions of files before (similar to Apple Time Machine), which I don't think those systems have (or maybe I just haven't looked at them in a while).
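    That version history is the part Duplicati handles explicitly. A sketch of the relevant options, with names as I remember them from the Duplicati 2 docs, so verify against your version:

    ```
    # Keep the last 10 versions of every file:
    --keep-versions=10

    # Or thin old versions out over time: one per day for a week,
    # one per week for a month, one per month for a year:
    --retention-policy="1W:1D,4W:1W,12M:1M"
    ```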
  6. Looking around a bunch, I may do something like Duplicati https://www.duplicati.com/ for the family, provided I can get it running in unRAID, then use something like Backblaze B2 to send that to the cloud. It'd make my system the primary backup point, then dump all that to Backblaze, but maybe we can use the money saved by not buying CrashPlan subscriptions to set up a few more unRAID systems amongst each other for more points of backup.
  7. I'm just trying to find out what'll happen to the client. I had recommended CrashPlan to many different family members, who all had their home service, and a large portion of my unRAID server acts as their secondary backup site. As I live 30 minutes from most of my family, they could stop over and do their initial backups to my server while they waited forever for the slow upload speeds in our area to back up to CrashPlan. I also had one instance where my aunt accidentally trashed 30GB of photos, and it was easier for her to visit and restore from me than to wait for everything to download again from CrashPlan. I still used and enjoyed their main service, but the multiple backup locations were such a great feature for family who may not be as technically savvy. I'm just trying to figure out how I can continue to provide (relatively) local backups, or whether there is any other software that'll work as well as CrashPlan. Will the client continue to work the way it does today for remote backups not destined for CrashPlan Central? I mean, I can hope, but they've screwed over so many loyal customers today that somehow I can't imagine they'd leave such a beneficial feature intact.
  8. That's definitely an improvement over what I was writing. Thanks for pointing out that it exists!
  9. I had been doing well keeping the appdata for various docker containers on a cache drive, using the Cache Only option on the share. This let me keep the number of spinning disks down while lots of things worked in the background. That is, it worked well until the cache drive died and I lost everything. It wasn't anything super important, but it was frustrating to rebuild the Plex data, re-tell CrashPlan where my drives were, and so on. This time around I set up Cache Only once again, and built an rsync job triggered via cron to copy things from my appdata share to an appdata-backup share which does not use the cache drive. This lets me keep everything on the cache drive only, but still have something to go back to if the cache drive dies again. Is there a way this could be done automatically in unRAID? Some sort of Only with Backup option for the cache settings, so that it would use the same share location, but the mover script would rsync things to the share on the main array without deleting anything off the cache.
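    For anyone who wants to copy this, the whole setup is one cron entry. The share names match what I described above; adjust to taste:

    ```
    # Nightly at 3am: mirror the cache-only appdata share to a share that
    # lives on the array. rsync never removes files from the source;
    # --delete only prunes files from the backup that no longer exist
    # in appdata.
    0 3 * * * rsync -a --delete /mnt/user/appdata/ /mnt/user/appdata-backup/
    ```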
  10. Running half a dozen docker containers on my box (things like Plex, CrashPlan, BTSync, Hexo, Beets, NZBGet, Sonarr, etc.), I was running into a situation where, when everything was up and running and Plex was doing some transcoding, I sometimes would run out of memory. With 99% iowait, kswapd running, and free showing all the memory used, I decided to upgrade from the 4GB that was in the system. Catching a sale at Newegg, I grabbed 16GB for only 15 dollars more than 8GB would have been. I now have everything up and running, and it's using exactly 4.05GB of memory. I knew 16 was overkill, but I figured the system might use some of that memory automatically for caching of objects or something. Any thoughts on how to use that extra memory productively? The motherboard/CPU isn't capable of virtualization, so I can't use it on a Windows VM or something. I haven't found any settings in the configs of any of the apps to use the extra memory to improve performance or cache more. I guess I could use it as a ramdisk, but I don't really have anything I would need to ramdisk, as I already have a cache drive in my system. Yeah, the extra memory will probably never fully get used, but I was just curious if anyone had a practical use I hadn't thought about.
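    One ramdisk use that might actually fit this box: letting Plex transcode into RAM instead of onto the cache drive. A sketch (the mount point and size are arbitrary choices, not unRAID defaults):

    ```
    # Carve 4GB of the idle RAM into a tmpfs for transcode scratch space:
    mkdir -p /tmp/plex-transcode
    mount -t tmpfs -o size=4g tmpfs /tmp/plex-transcode

    # Map it into the Plex container, e.g. add to the docker run line:
    #   -v /tmp/plex-transcode:/transcode
    # then point Plex's "Transcoder temporary directory" setting at /transcode.
    ```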
  11. I had everything up and running in beta 6, upgraded straight to beta 10 (probably a mistake, I know), and had the hardest time getting everything to work. I was not able to import any of the configurations from my beta 6 setup, and had to manually read the XML from my old configs and re-create the new containers. While the vast majority of my containers come up, none can talk to each other or access the services the others provide. So nzbdrone can't talk to sabnzbd, or plex, or nzbmegasearch, nor can couchpotato, which kind of makes all of them useless. Is there some security setting that was enabled which is preventing this, or something I missed when I tried to recreate these configs?
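    If anyone else hits this: on Docker's default bridge, "localhost" inside one container is not the host or the other containers, so app settings that point at 127.0.0.1 break after a rebuild. The usual fix is to point each app at the unRAID box's LAN address and the mapped host port. The IP and port below are placeholders:

    ```
    # In nzbdrone/couchpotato, set the SABnzbd connection to the server,
    # not localhost:
    #   Host: 192.168.1.10    Port: 8080
    # Sanity check from the unRAID console that the mapped port answers:
    wget -qO /dev/null http://192.168.1.10:8080 && echo "sabnzbd reachable"
    ```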
  12. BTsync: I see that the name of this docker container has changed to lonix/btsync. I tried setting this up through the docker configuration page plugin, and this is the command it generated:

    /usr/bin/docker run -d --name="BTSync" --net="bridge" -p 8888:8888/tcp -p 55555:55555/tcp -v "/mnt/user/BTSync/":"/btsync":rw -v "/etc/localtime":"/etc/localtime":ro lonix/btsync

    Yet the container refuses to start, throwing the following error: "Storage path specified in config file does not exist." What am I missing?
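    In case it helps anyone debugging the same thing: the path in that error is the one inside the container, so the question is what storage path the image's baked-in config points at versus what the -v flag maps. A diagnostic sketch; the config filename and location here are guesses, adjust to whatever the image actually ships:

    ```
    # Override the entrypoint to look around the image's filesystem:
    docker run --rm --entrypoint sh lonix/btsync \
        -c 'ls -la /btsync; find / -name "*.conf" 2>/dev/null'

    # And confirm the host side of the mapping exists and is readable:
    ls -ld /mnt/user/BTSync/
    ```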
  13. Thanks all. The system is built and currently preclearing the drives. I'd post pictures, but the ones already in this thread look a little cleaner. (It's amazing how quickly you can finish up a system build when a 2-year-old wakes up from his nap and decides he wants to "help".) I did mount the power supply so that the fan faced away from the CPU, and this seemed to work pretty well. Since this caused all the internal cables to come out near the top of the PSU, it was easy to tuck the unused PCI Express power cables between the top of the PSU and the case. The question I have so far is that the system reboots into the BIOS on every boot. I have to select the boot menu, where there are two entries for the USB drive: one is UEFI, the other is not. Selecting the UEFI version just causes the system to boot back into the BIOS; selecting the non-UEFI version causes unRAID to boot. Unfortunately, the BIOS boot order only lists the UEFI USB drive, which is why the system boots into the BIOS every time. Has anyone else had this issue?
  14. Thanks, I wasn't sure if the paste was needed or not, but I was planning on doing some other work on the desktop system, and I usually use custom coolers there, so I'll use it there.
  15. Thanks everyone for all your help; here's what I ended up with: http://pcpartpicker.com/user/skraut/saved/3kgX Some notes: I caught the disks and case on sale, and was just waiting until I had the time to decide on the rest of the components. I caught the CPU on sale today, so that made up my mind. The memory came out of a desktop a year or so ago when I upgraded it to 16GB; I probably would have gone with 8GB in this, but you can't argue with "free" RAM. I just hope the large cooling fins on this memory fit inside the case. I realize there are only 2 hard drives, so initially it'll have only 3TB of storage. The main purpose of this system is to replace an aging ARM-based ReadyNAS Duo with fairly new 2TB drives in it, and to move all the Plex hosting and SABnzbd to this unRAID server so that I don't have to keep a separate gaming desktop on 24/7. I'll clean off one of the drives from the ReadyNAS, move it over, then repeat the process with the other drive. The system should then give me room to grow organically as my needs increase, which was the whole point of coming to unRAID initially. Thanks again, I'll post more when the remaining parts get here and I start to get this assembled.