Seagate 8TB Shingled Drives in UnRAID



9 hours ago, BRiT said:

 

You should check out this thread on how Google Drive and rclone can provide unlimited storage for around $12 a month. They don't actually require 5 users, and they don't limit storage to 1TB when you have fewer than 5 users. Some users have well over 200 TB.

 

I was using Backblaze via a fleet of external USB3 drives and a Windows 10 client with SyncFolders. Needless to say, it was a bit much, ha ha. I've been looking for something quick, secure, and native to unRAID for a LONG time. I just set up GSuite tonight and have my ENTIRE server syncing to the cloud, uploading at a staggering 250 Mbit/s (peaks at 35 MB/s!). I have fiber with 1 Gig up, though... One note: in order to set up GSuite, YOU WILL NEED A DOMAIN. That was a bit surprising, but I already had one, so I just hooked it up through GoDaddy authentication pass-through (GSuite prompts for this).

 

Thanks for the suggestion. Instead of using RClone, I ended up using the Duplicati docker with encryption. So far, so good!
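For anyone wanting to try the same route, here is a minimal sketch of running the linuxserver.io Duplicati container on unRAID. The container name, host paths, and PUID/PGID values below are assumptions based on common unRAID conventions; adjust them to your own setup.

```shell
# Run the linuxserver.io Duplicati image; the web UI listens on port 8200.
# Host paths are illustrative unRAID share locations -- change as needed.
docker run -d \
  --name=duplicati \
  -p 8200:8200 \
  -v /mnt/user/appdata/duplicati:/config \
  -v /mnt/user:/source:ro \
  -e PUID=99 -e PGID=100 \
  --restart unless-stopped \
  lscr.io/linuxserver/duplicati:latest
# Then browse to http://<server-ip>:8200, add a Google Drive destination,
# and enable AES-256 encryption in the backup job settings.
```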

7 hours ago, falconexe said:

I just setup GSuite tonight and have my ENTIRE server syncing to the cloud and uploading at a staggering 250Mbit/s (Peaks at 35 Megabytes/s!). I have fiber with 1Gig Up though... So far, so good!

 

Welp, there are drawbacks to having fiber internet. I already hit the limit of GSuite (Google Drive) in just a few hours at those sustained upload speeds. Apparently, you can only upload 750 GB in a single 24-hour window (shared across all GSuite products).

 

I hit exactly 725.4 GB before Duplicati started throwing server-side errors. I have since throttled my uploads to 8 MB/s to keep it under this ceiling. (The math works out to 691.2 GB/day: 8 MB/s × 86,400 s ÷ 1000 = 691.2 GB. 9 MB/s puts it over, and the throttle parameter has to be a whole number.) This should keep Duplicati happy and support uninterrupted backups during my initial upload set. This would never be a problem once all files are initially backed up, but it is an interesting facet of this solution's workflow.
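The throttle math above can be sketched as a quick check, including finding the largest whole-number rate that stays under the 750 GB/day cap:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

def daily_volume_gb(rate_mb_s: int) -> float:
    """Data uploaded in one day at a constant rate, in decimal GB."""
    return rate_mb_s * SECONDS_PER_DAY / 1000

def max_whole_rate(cap_gb: float = 750.0) -> int:
    """Largest whole-number MB/s rate whose daily volume stays under the cap."""
    rate = 1
    while daily_volume_gb(rate + 1) <= cap_gb:
        rate += 1
    return rate

print(daily_volume_gb(8))  # 691.2 GB/day -- under the 750 GB cap
print(daily_volume_gb(9))  # 777.6 GB/day -- over the cap
print(max_whole_rate())    # 8
```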

 

 


11 minutes ago, falconexe said:

 

Welp, there are drawbacks from having fiber internet. I already hit the limit of GSuite (Google Drive) in just a few hours at those sustained upload speeds. [...]

 

That's why they suggest using Service Accounts in the rclone thread. They have the means to automatically cycle through them to remove these limits when using rclone.
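The rotation idea can be sketched roughly as below. The directory layout, remote name, and the 700G safety margin are assumptions; rclone's `--drive-service-account-file` and `--max-transfer` flags are real, and each service account gets its own 750 GB/day quota.

```shell
# Hypothetical layout: one JSON key per Google service account in ./sa-keys/.
# Each pass runs rclone as a different service account, so the per-account
# 750 GB/day quota effectively resets with every rotation.
for key in sa-keys/*.json; do
    rclone copy /mnt/user/backups gdrive:backups \
        --drive-service-account-file "$key" \
        --max-transfer 700G   # stop this account safely short of the cap
done
```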

8 minutes ago, BRiT said:

 

That's why they suggest using Service Accounts in the rclone thread. They have the means to automatically cycle through them to remove these limits when using rclone.

Good to know. Is there anything like that for use with Duplicati? I saw that with RClone, you can get an API ID.

  • 3 months later...

I have the 6TB WD60EFAX, a drive whose recording type WD never disclosed until spring 2020; it turns out these are SMR.

 

Model numbers of the drives and SMR or CMR: 

2TB    WD20EFAX    SMR
2TB    WD20EFRX    CMR
3TB    WD30EFAX    SMR
3TB    WD30EFRX    CMR
4TB    WD40EFAX    SMR
4TB    WD40EFRX    CMR
6TB    WD60EFAX    SMR
6TB    WD60EFRX    CMR
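The table above boils down to a simple lookup, sketched here (the WD80EFAX entry reflects the 8TB exception mentioned later in the thread; treat any model not in the table as unknown rather than guessing from the suffix):

```python
# Recording technology by WD Red model number, per the table above.
RECORDING_TECH = {
    "WD20EFAX": "SMR", "WD20EFRX": "CMR",
    "WD30EFAX": "SMR", "WD30EFRX": "CMR",
    "WD40EFAX": "SMR", "WD40EFRX": "CMR",
    "WD60EFAX": "SMR", "WD60EFRX": "CMR",
    "WD80EFAX": "CMR",  # the 8TB exception: EFAX suffix but CMR
}

def recording_tech(model: str) -> str:
    """Return 'SMR', 'CMR', or 'unknown' for a WD Red model number."""
    return RECORDING_TECH.get(model.strip().upper(), "unknown")

print(recording_tech("WD60EFAX"))  # SMR
print(recording_tech("WD80EFAX"))  # CMR
```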

 

I contacted WD, and they are doing an advance RMA without taking my credit card number. They are sending me a WD60EFRX first. Then, once I have done the whole preclear, swap, rebuild, and wipe-the-old-drive cycle, it should take a few days, I guess.

 

I already have:

WD60EFRX parity

WD30EFRX

WD30EFRX

WD60EFAX (the drive I will be exchanging)

WD50EFRX (not added to the array yet)

 

When I increase the size of the array, I usually buy a bigger parity drive, and the old parity drive replaces a smaller drive. I'm limited to 4 array drives at the moment, plus 2 cache.

 

WD has a weird naming convention, because they have a WD80EFAX that is CMR.

 

Edited by Paul_Ber
8 minutes ago, Paul_Ber said:

WD has a weird naming convention, because they have a WD80EFAX that is CMR

I think all models 8TB and above are CMR only, so they don't worry about model number distinctions as much, but it's still confusing given the naming conventions on the lower-capacity models.
