falconexe Posted August 24, 2020

Another great tutorial on this topic if you just want a straightforward encrypted cloud sync.
falconexe Posted August 24, 2020

Here is another way of performing block-based backups (true backups), not syncing. You can use the same cloud services, or even UNRAID to UNRAID.
falconexe Posted August 24, 2020

9 hours ago, BRiT said:
You should check out this thread on how Google Drive and rclone can provide unlimited storage for around $12 a month. They don't actually require 5 users, and they don't limit storage to 1TB with fewer than 5 users. Some users have well over 200 TB.

I was using Backblaze via a fleet of external USB3 drives and a Windows 10 client with SyncFolders. Needless to say, it was a bit much ha ha. I've been looking for something quick, secure, and native to UNRAID for a LONG time. I just set up GSuite tonight and have my ENTIRE server syncing to the cloud and uploading at a staggering 250 Mbit/s (peaks at 35 MB/s!). I have fiber with 1 Gig up though...

One note: in order to set up GSuite, YOU WILL NEED A DOMAIN. That was a bit surprising, but I already had one, so I just hooked it up through GoDaddy authentication pass-through (GSuite prompts for this).

Thanks for the suggestion. Instead of using rclone, I ended up using the Duplicati docker with encryption. So far, so good!
falconexe Posted August 24, 2020

7 hours ago, falconexe said:
I just set up GSuite tonight and have my ENTIRE server syncing to the cloud and uploading at a staggering 250 Mbit/s (peaks at 35 MB/s!). I have fiber with 1 Gig up though... So far, so good!

Welp, there are drawbacks to having fiber internet. I already hit the limit of GSuite (Google Drive) in just a few hours at those sustained upload speeds. Apparently, you can only upload 750GB in a single 24-hour window (shared across all GSuite products). I hit exactly 725.4GB before Duplicati started throwing server-side errors.

I have since throttled my uploads to 8 MB/s to keep it under this ceiling. The math works out to 691.2 GB/day (8 MB/s x 86,400 s/day / 1,000 MB per GB = 691.2 GB/day); 9 MB/s puts it over, and the parameter has to be a whole number. This should keep Duplicati happy and support uninterrupted backups during my initial upload set. This would never be a problem once all files are initially backed up, but it is an interesting facet of this solution's workflow.
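For anyone checking that throttle math against a different upload rate, here is a minimal Python sketch of the same calculation. The constant and function names are illustrative only; the 750 GB/day cap is the figure reported above.

```python
SECONDS_PER_DAY = 86_400
DAILY_CAP_GB = 750  # Google's upload cap per 24 hours, per the post above

def daily_upload_gb(rate_mb_per_s: int) -> float:
    """Total data uploaded in one day at a sustained rate, in decimal GB."""
    return rate_mb_per_s * SECONDS_PER_DAY / 1000

for rate in (8, 9):
    total = daily_upload_gb(rate)
    verdict = "under the cap" if total <= DAILY_CAP_GB else "over the cap"
    print(f"{rate} MB/s -> {total:.1f} GB/day ({verdict})")

# Output:
# 8 MB/s -> 691.2 GB/day (under the cap)
# 9 MB/s -> 777.6 GB/day (over the cap)
```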
BRiT Posted August 24, 2020

11 minutes ago, falconexe said:
Welp, there are drawbacks to having fiber internet. I already hit the limit of GSuite (Google Drive) in just a few hours at those sustained upload speeds. Apparently, you can only upload 750GB in a single 24-hour window (shared across all GSuite products). I hit exactly 725.4GB before Duplicati started throwing server-side errors. I have since throttled my uploads to 8 MB/s to keep it under this ceiling.

That's why they suggest using Service Accounts in the rclone thread. They have the means to automatically cycle through them to remove these limits when using rclone.
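For reference, the rotation BRiT describes could be sketched roughly like this. This is a minimal sketch, assuming you have already created several Google Cloud service accounts with access to the same drive and exported their JSON key files; the key file names, remote name, and source paths here are hypothetical, while rclone's copy command and --drive-service-account-file flag are real.

```python
import subprocess
from itertools import cycle

# Hypothetical service-account key files and source folders.
SA_KEYS = ["sa1.json", "sa2.json", "sa3.json"]
BATCHES = [
    "/mnt/user/Backups/part1",
    "/mnt/user/Backups/part2",
    "/mnt/user/Backups/part3",
]

# Each service account is treated as its own user for quota purposes,
# so rotating keys between batches spreads uploads across quotas.
for key, src in zip(cycle(SA_KEYS), BATCHES):
    subprocess.run(
        ["rclone", "copy", src, "gdrive:backups",
         "--drive-service-account-file", key],
        check=True,
    )
```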
falconexe Posted August 24, 2020

8 minutes ago, BRiT said:
That's why they suggest using Service Accounts in the rclone thread. They have the means to automatically cycle through them to remove these limits when using rclone.

Good to know. Is there anything like that for use with Duplicati? I saw that with rclone, you can get an API ID.
JorgeB Posted August 25, 2020

Please stay on topic and start another thread if you want to continue that discussion.
falconexe Posted August 25, 2020

2 minutes ago, johnnie.black said:
Please stay on topic and start another thread if you want to continue that discussion.

Yeah... no worries.
Paul_Ber Posted December 15, 2020

I have the 6TB WD60EFAX, which WD never said the recording type of until spring 2020; well, these are SMR. Model numbers of the drives, and whether each is SMR or CMR:

2TB WD20EFAX: SMR
2TB WD20EFRX: CMR
3TB WD30EFAX: SMR
3TB WD30EFRX: CMR
4TB WD40EFAX: SMR
4TB WD40EFRX: CMR
6TB WD60EFAX: SMR
6TB WD60EFRX: CMR

I contacted WD, and they are doing an advance RMA without my giving them a credit card number. They are sending me a WD60EFRX first. Then, once I have done the full preclear, swap, rebuild, and wipe of the old drive, it should be a few days, I guess.

I already have:

WD60EFRX (parity)
WD30EFRX
WD30EFRX
WD60EFAX (the drive I will be exchanging)
WD50EFRX (not added to the array yet)

When I increase the size of the array, I usually buy a bigger parity drive, and the old parity drive replaces a smaller drive. I'm limited to 4 drives at the moment, plus 2 cache drives.

WD has a weird naming convention, because they have a WD80EFAX that is CMR.
Hoopster Posted December 15, 2020

8 minutes ago, Paul_Ber said:
WD has a weird naming convention, because they have a WD80EFAX that is CMR.

I think all models 8TB and above are CMR only, so they don't worry about model number distinctions as much, but it's still confusing given the naming conventions on the lower-capacity models.