Froberg

Members
  • Content Count: 35
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Froberg
  • Rank: Advanced Member

  1. Sorry to disappoint everyone, but I simply haven't had the time to look into this at all.
  2. A memory leak usually presents itself as memory usage steadily increasing over time. It's usually down to a programming mistake, so if it's behaving the way I described, I'd gather log files and send them to the Resilio people - or perhaps to the maintainer of the Docker container. Alternatively, live with it and have the container restarted at regular intervals via scripting (see the sketch after this list) - but it does sound like you're describing a memory leak. It sounds like it's not unloading data as it handles it.
  3. I set mine up like this (see attached). You can always add another host path for more folders.
  4. Updated from 6.6.3 to 6.6.6. No issues. Also: \m/
  5. No, not yet. I have to set aside time to do it, because I haven't done much scripting in unix before, and I worry about screwing something up in production 😃 For now it's Total Commander from point A to B.
  6. I guess I didn't expect it to actually be as simple as that. I'll look into user scripting now. Cheers, Squid.
  7. Hi all, search has failed me dramatically, so I apologize if this is covered elsewhere. Basically I'm fairly new to UnRaid, having moved from xpenology. I used to have btsync on xpenology and my local NAS540 to keep a copy of my data on that box, too. It's a simple JBOD array with some drives and SMB shares. Easy peasy.
     Since xpenology failed and dropped my array, I had to recover from that box. Unfortunately one of the drives failed during the restore, so I had to restructure. I'd added metarepository to it to get btsync - rebuilding the JBOD caused all local data to be wiped, including the metarepository, which is no longer available. So I'm stuck with only the ability to create SMB shares.
     I tried using duplicati to create a backup, but as I just want a plain 1:1 synced copy it doesn't really scratch that itch. (I am looking at it for jottacloud, which I'm hoping to use as a cloud-backup location, though.) Having absolutely scoured the internet and the community applications, I haven't come across anything that would let me do this. I've mounted all the shares in unraid and they are accessible. (Duplicati could use them, too.) But I'm essentially just looking for something that's able to toss my media folders into the proper network shares and keep any changes synced. That's it (see the rsync sketch after this list for the sort of thing I have in mind).
     At the moment I am backing up the most critical data using my workstation as an intermediary, but that's hell on transfer speeds as well as long-term viability. I've got 16 TB of data I need to keep track of. Can anyone tell me of a good way to accomplish this that doesn't require anything on the NAS540's side? Cheers!
  8. Parity was, indeed, much quicker once I stopped messing around. Everything is up and running, and I'm very happy with it so far. I've got my shares, my Plex, Medusa and Transmission going. The next step is getting my website back up and running, but that's for a later date; I'm having a hard time finding a good guide for it. Considering I used a junk drive I had lying around to set this up, I'd rather get the running configuration onto one of the new flash drives I bought specifically for UnRaid before I buy my license. Might as well. What's the best way to go about it, just so I don't screw anything up? I figure backing up the current flash first can't hurt (rough sketch after this list). Cheers!
  9. I stopped the parity sync and am restoring data. It still takes a while - I'm guessing 3-4 days. We'll see if that changes the six-day parity sync time I was looking at. Cheers.
  10. I haven't actually experienced anything negative from doing both - not that I've noticed, at any rate. It handles very differently from the RAID5 setup I'm accustomed to, so if there are issues I've likely just chalked them up to that. Are there actual downsides to what I'm doing, or will parity just take longer to build this way? I just had a thought that it might not be best practice and wanted to ask either way. What about setting up dockers and VMs? Should I also wait for everything else to finish? Cheers.
  11. Hi all, I've been on a... path... A failed motherboard, and not one but two of my four 6TB WD Reds gave out on me. I'm back with a freshly RMA'ed motherboard and two brand-spanking-new 10TB Reds. I have put one 10TB in as parity, and the other in to replace the failed data drive, as I need the storage. I plan on getting another 10TB drive for a second parity next month; they're a bit expensive, after all. Not to worry, I got the two drives from different retailers.
      Regardless: I am building parity, and it takes ages. I've begun adding a few shares and moving data back from my backup box (a NAS540 with JBOD archive drives, so speed is of the essence, really). It will take days to build parity and days again to transfer the 13 terabytes of data... but am I doing this wrong? Should I not worry about parity until I'm done moving everything over? I've added a 250GB cache SSD to the server to help speed things up, but it kept running full, which prompted me to set the "mover" to run hourly - and it seems it hasn't stopped "moving" ever since.
      To summarize: should I stop parity until the data transfer is complete, or doesn't it really matter one way or the other? Thank you.
  12. The guide does say there's a 32 gig limit for the USB drive. I think I'll just get the Kingston one and see where that gets me. I have a few of them already, but they're USB 3, so I guess I gotta shell out 🙂
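
On the container-restart workaround mentioned in post 2: a minimal sketch of what that scripting could look like, assuming the container is called "resilio-sync" - the name is a guess, so check what `docker ps` actually reports. On Unraid something like this could be dropped into the User Scripts plugin on a daily schedule, or into a cron entry.

    #!/bin/bash
    # Restart the (assumed) resilio-sync container to work around the suspected
    # memory leak. Swap in whatever name "docker ps" actually reports.
    CONTAINER="resilio-sync"

    # Only attempt the restart if the container is actually running.
    if docker ps --format '{{.Names}}' | grep -qx "$CONTAINER"; then
        docker restart "$CONTAINER"
    else
        echo "Container $CONTAINER not running; nothing to restart."
    fi

It's a workaround rather than a fix; the log files still belong with the Resilio/container people.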
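On the 1:1 sync asked about in post 7 (and the user scripting from post 6): a rough sketch of an rsync-based user script, assuming the NAS540 shares are already mounted somewhere under /mnt - both paths below are made up, so substitute your own. It's a one-way mirror, so --delete removes files on the backup side that no longer exist on the source.

    #!/bin/bash
    # One-way mirror of a local media share to the SMB-mounted NAS540 share.
    # Both paths are placeholders - point them at your actual mounts.
    SRC="/mnt/user/Media/"
    DST="/mnt/remotes/NAS540_Media/"

    # -a preserves attributes and recurses, -v logs what gets copied,
    # --delete makes the destination an exact mirror of the source.
    rsync -av --delete "$SRC" "$DST"

Run it by hand once with --dry-run (or -n) first to see what rsync would do before letting it loose on 16 TB, then schedule it via the User Scripts plugin.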
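On moving to a new flash drive (post 8): whatever the recommended migration steps turn out to be, the whole Unraid configuration lives on the flash, so copying it off first is cheap insurance. A minimal sketch, assuming the flash is mounted at /boot (standard on Unraid) and that a "backups" share exists - the share name is made up.

    #!/bin/bash
    # Copy the whole flash drive (mounted at /boot on Unraid) to a dated folder
    # on the array before migrating to the new stick. Share name is a placeholder.
    DEST="/mnt/user/backups/flash-$(date +%F)"

    mkdir -p "$DEST"
    cp -a /boot/. "$DEST"/
    echo "Flash contents copied to $DEST"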