Everything posted by Helmonder

  1. Nope.. Looks like it's the other way around: <dataDeDupAutoMaxFileSize>1073741824</dataDeDupAutoMaxFileSize> <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan> You have it set up exactly the opposite, meaning that it will dedup unless it's a very large file... Dedup costs a lot of CPU and memory and makes CrashPlan slow..
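     A quick way to check what is currently in the config, and what disabling auto dedup would look like. This is only a sketch: the container name and appdata path are from my own setup, and my reading of the values (both set to 1 effectively turns auto dedup off) is based on the post linked further down, so adjust and verify for your own system.

        # Check the current dedup settings in the CrashPlan config
        # (assumes the docker's /config is mapped to /mnt/user/appdata/CrashPlanPRO)
        grep -E 'dataDeDupAutoMaxFileSize(ForWan)?' /mnt/user/appdata/CrashPlanPRO/conf/my.service.xml

        # With both values set to 1, only files of 1 byte or less qualify for
        # automatic dedup, which effectively disables it for local and WAN targets:
        #   <dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>
        #   <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>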
  2. Anything else using your bandwidth? It all depends.. I regularly get 45Mb.. Sent from my iPhone using Tapatalk
  3. I am using: jlesage/crashplan-pro But hey! If it's on the host then I can do that right away, thanks !! EDIT: Just did that, it did not immediately work, CrashPlan also kept restarting.. So I also increased the memory to 8 gigs (it has been working on 4 gigs so far and I have backed up 14 terabytes of files to CrashPlan without a problem...) CrashPlan's advice is to have 1 gig per terabyte, but that is based on the average number of files on a system, not on media storage (which is typically a lot of data but not that many files / large files). If the memory usage extrapolates in the same way I should be fine until I hit 35 TB (the already backed-up set also contains my music library, which is a lot of files again).
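     For reference, this is roughly how the memory bump looks for the jlesage/crashplan-pro container. Just a sketch: as far as I know the image exposes the Java heap size through a CRASHPLAN_SRV_MAX_MEM container variable, and the volume mappings below are my assumptions, so check the image documentation before relying on it.

        # Give the CrashPlan engine an 8 GB Java heap (jlesage/crashplan-pro).
        # In Unraid you would add/edit this as a variable in the docker template;
        # the equivalent docker run command would be something like:
        docker run -d --name CrashPlanPRO \
          -e CRASHPLAN_SRV_MAX_MEM=8G \
          -v /mnt/user/appdata/CrashPlanPRO:/config \
          -v /mnt/user:/storage:ro \
          jlesage/crashplan-pro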
  4. My CrashPlan is not backing up anymore.. Seems to have happened since the latest update.. Although it might also be a coincidence. I have traced the issue to not enough inotify watches, as described in this post: https://support.code42.com/CrashPlan/4/Troubleshooting/Linux_real-time_file_watching_errors I want to make the fix and for that I of course have to log into the CrashPlan docker.. However when I give the command: docker exec -it CrashPlanPRO bash it does not work and I get the following error message: rpc error: code = Unknown desc = oci runtime error: exec failed: container_linux.go:262: starting container process caused "exec: \"bash\": executable file not found in $PATH" The same command works fine with my other dockers..
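     In case anyone hits the same thing: the image is quite minimal, so the shell inside it may just be sh rather than bash, and the inotify limit is a kernel setting that can be raised on the host anyway. A rough sketch of what I mean (the exact limit value is an assumption, pick what fits your file count):

        # The container has no bash, so use sh to get a shell inside it:
        docker exec -it CrashPlanPRO sh

        # The inotify watch limit is a host (kernel) setting, so it can also be
        # raised on the Unraid host itself instead of inside the container:
        sysctl -w fs.inotify.max_user_watches=1048576

        # To make it survive a reboot on Unraid, add that same sysctl line
        # to /boot/config/go.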
  5. The beta has been running fine for many weeks over here.. However it IS still a beta.. So if you are cautious about that, just hold on a little longer.. In a week or so I am expecting this to be released and you will have no more problems.. I just killed off my last KVM and am running Docker only.. No problems whatsoever.
  6. It is possible in unraid 6.4, currently in beta. I was in the exact same situation as you and it works like a charm now.. Plex has its own IP address and is now bypassing the outgoing VPN.
  7. Agree on that... I would pay for that function..
  8. That would work also.. There are a few ways to recreate the CrashPlan experience, it's only that CrashPlan had it all nicely in one package
  9. So what you are saying is that whenever I delete a file on my main system it will be moved to sync/archive on my secondary system? That actually sounds great... I only have to monitor that archive dir for accidental deletions...
  10. Mmm.. Sounds like something they can build in though.. For me it still is a viable option since it will still protect me from malware (since that constitutes a change to a file, not a deletion).. A deleted file can still be found on CrashPlan for Business.. But since it will take a sh*tload of time to rebuild from a CrashPlan online environment I need to have a local set available too.. Must say it is EXTREMELY slow though.. For something running locally..
  11. Ehm nope... deleted files are stored in the archive and can be kept indefinitely..
  12. In addition to switching to CrashPlan Small Business I have now also enabled BTSYNC using the Linuxserver.io docker. This fills the gap the local backups left: it does versioning, even with unlimited versions if you want. So basically that is the same functionality as with CrashPlan, with the added benefit that the data is just there on the other side and readable, instead of encrypted in a backup format.. So basically with very little effort I am now exactly where I was before with CrashPlan, only I pay less than I did before over a period of 1 1/2 years and I have a better local backup solution.. Things are great :-)
  13. This post: Also worked for me... Adoption worked without a problem. CrashPlan PRO is backing up again.
  14. That was what I was doing.. Only I was backing up the other systems by using crashplan.. So I need to find another solution for that..
  15. Have you made the change in the CrashPlan xml? The basic reason CrashPlan slows down over time is that it is continuously trying to dedupe everything.. With media files that is actually not very useful, but CrashPlan still takes time to try and do it... I have made my CrashPlan 20 times faster by making the following change: 1) Stop the CrashPlan docker 2) Open my.service.xml (it's in the crashplan/conf folder) 3) Find the following line: <dataDeDupAutoMaxFileSizeForWan>0</dataDeDupAutoMaxFileSizeForWan> 4) Now change it to: <dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan> (so change the ZERO into a ONE) 5) Restart CrashPlan.. CrashPlan will no longer do any deduping and that really makes stuff faster. I did not think of that myself, it's from the following post: http://networkrockstar.ca/2013/09/speeding-up-crashplan-backups/
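     For anyone who prefers doing it from the command line, this is roughly the same procedure as the steps above. A sketch only: the container name and appdata path are from my own setup, so adjust them to yours and make a copy of the file first.

        # 1) Stop the CrashPlan docker
        docker stop CrashPlanPRO

        # 2-4) Flip the WAN dedup setting from 0 to 1 in my.service.xml
        sed -i 's|<dataDeDupAutoMaxFileSizeForWan>0</dataDeDupAutoMaxFileSizeForWan>|<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>|' \
          /mnt/user/appdata/CrashPlanPRO/conf/my.service.xml

        # 5) Restart CrashPlan
        docker start CrashPlanPRO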
  16. I think you should... for the next year you will be on 2,50 a month... And even if the price remains 120 after that, that really is not that much for data safety in my opinion..
  17. Actually the Small Business deal is not so bad... At least not if you compare it to the online home option.. I was backing up to the CrashPlan Home cloud, this subscription just continues as Small Business and after that it continues for a year at a 75% discount, which works out to EUR 2,50 each month... That is cheaper than the CrashPlan Home option... I am covered until into 2019 this way.. After that we'll see what happens.. What I am missing is the "backup to another server at home or a friend" option.. I am actually still waiting for Limetech to build in a "server mirror/backup" option.. Something to keep two servers in sync.. This would be of great benefit, probably not too difficult (mainly some rsync options) and it would be a great way to sell extra server licenses.. JonP? Something like that in the works? It was on the agenda at some point..
  18. Interesting... It does not sound too difficult to GLOBALLY exclude a disk the moment it shows some kind of trouble... Global exclusion could be set on temperature, but also based on SMART values, a read error, etc... Does not sound like a bad idea actually, and the basic functionality is already there.. (of course this would only work if people use user shares... disk shares would still be possible to write to..)
  19. What is then the correct way to determine if a docker is set up for a different IP address?
  20. Fix Common Problems keeps telling me that my CrashPlan and Plex dockers are configured wrong, that they have the wrong network type... They actually do not.. Since the new Unraid version it is possible to give dockers their own IP address and this is what I have chosen to do very deliberately.. Maybe it is possible to change the check in such a way that when the other requirements for a docker IP address are met, the interface type is no longer flagged as an error? This is what it shows now: Jul 29 17:52:19 Tower root: Fix Common Problems: Error: Docker Application CrashPlan is currently set up to run in br0 mode ** Ignored Jul 29 17:52:19 Tower root: Fix Common Problems: Error: Docker Application plex is currently set up to run in br0 mode ** Ignored This is because the "Network type" is set to BR0; that however is not wrong when you need the docker to have its own IP address, and in that case the "fixed IP address" field is also filled. So if BR0 is set for these containers and the fixed IP address field is empty, then it should trigger an error. If BR0 is set and the fixed IP address field is filled, then there should be no error..
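     To illustrate the check I mean, something along these lines (just a sketch of the logic, not the plugin's actual code; the container names are examples from my own setup):

        # Only flag a br0 container as an error when it has no fixed IP assigned
        for c in CrashPlanPRO plex; do
          net=$(docker inspect -f '{{.HostConfig.NetworkMode}}' "$c")
          ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{json .IPAMConfig}}{{end}}' "$c")
          if [ "$net" = "br0" ] && [ "$ip" = "null" ]; then
            echo "Error: $c is set to br0 without a fixed IP address"
          fi
        done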
  21. and doing it wrong causes overheating..
  22. Maybe you have already made sure this is ok, but double check the airflow in your system... I used to have a system that just ran too hot and it turned out that the exhaust fan was actually blowing inwards. Front fans mostly should pull air -into- the case and back fans should blow it out again..
  23. The only thing needed is fitting this nicely in the interface, now if we could find someone who would be amazing at that....
  24. I must try this out... What you are describing is the exact reason I have Plex running in a dedicated VM.. Would love to have a docker with a separate IP address.. I am somewhat reluctant to do it this way though.. Since it is not formally supported it could break with an update? Sounds like something that would be great to fit in the GUI itself.. Sent from my iPhone using Tapatalk
  25. I have a 30 TB Unraid that is backing up through CrashPlan to my backup Unraid system.. CrashPlan has 16GB and that is more than enough.. The 1GB per TB guideline is based on a typical spread of files... When you are mostly storing movies and series the number of files you are backing up is much lower than average, so you can get away with less memory..