Any deals on Unraid?


CSIG1001


2 hours ago, CSIG1001 said:

Also, I would think using cache drives in RAID would also help

Cache will just get in the way of a large transfer since it won't have the capacity, and there is no way to move from cache to the array as fast as you can write to cache. Mover is intended for idle time. Best to leave cache out of any initial data load.
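As a back-of-the-envelope sketch (the load size and cache size below are assumptions; only the ~50MB/s array write speed comes from this thread), here's why cache can't absorb a bulk initial load:

initial_load_tb   = 10    # size of the initial data load (assumption)
cache_capacity_tb = 1     # typical SSD cache pool size (assumption)
array_write_mbps  = 50    # parity-protected array write speed mentioned in this thread

# The cache fills almost immediately relative to the whole job...
fraction_cached = cache_capacity_tb / initial_load_tb
print(f"Cache holds only {fraction_cached:.0%} of the load before it is full")

# ...and Mover can only drain it to the array at array speed, so the bulk of
# the transfer still happens at ~50 MB/s either way.
hours_at_array_speed = initial_load_tb * 1e6 / array_write_mbps / 3600
print(f"The rest still needs roughly {hours_at_array_speed:.0f} hours at {array_write_mbps} MB/s")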

 

2 hours ago, CSIG1001 said:

one can only get 50MB/s transfers

Something you can do to increase write speed to the array is to not install parity until after the initial data load. Then you won't have the overhead of parity updates.
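To put rough numbers on that (the parity-free write speed below is an assumption for a typical spinning disk; the 50MB/s figure is the one quoted above):

load_tb             = 10    # assumed size of the initial data load
with_parity_mbps    = 50    # parity-protected write speed quoted in this thread
without_parity_mbps = 150   # rough bare-disk sequential write speed (assumption)

for label, speed in [("with parity", with_parity_mbps), ("parity removed", without_parity_mbps)]:
    hours = load_tb * 1e6 / speed / 3600
    print(f"{label:>14}: about {hours:.0f} hours")

You pay for it with a single parity sync at the end, but that is one sequential pass at close to full disk speed.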

 

There is also another method of updating parity that is faster. The two methods and their tradeoffs are explained here:

 

https://forums.unraid.net/topic/50397-turbo-write/
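The gist of the two methods, sketched as per-write disk-operation counts (illustrative only; the linked thread covers the real tradeoffs):

def read_modify_write():
    # Default method: read old data + old parity, then write new data + new parity.
    # Only two disks need to spin, but every write costs reads first.
    return {"disks_spinning": 2, "reads": 2, "writes": 2}

def reconstruct_write(num_data_disks):
    # "Turbo write": read all the *other* data disks, recompute parity from scratch,
    # then write the new data and the new parity. All disks spin, but the target
    # disk skips the read-before-write, so sustained throughput is much higher.
    return {"disks_spinning": num_data_disks + 1,
            "reads": num_data_disks - 1,
            "writes": 2}

print(read_modify_write())
print(reconstruct_write(num_data_disks=6))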

 

1 hour ago, trurl said:


 

Thanks for the advice. I remember someone mentioned earlier not to use cache, and you explained it pretty well for me. Also, you mentioned it's OK not to use any parity HDDs during the transfer. After the data is moved, I just install the two parity HDDs, but where do I tell Unraid to make sure parity works? I am assuming it has something to do with building the array for parity?


When I first built my server, I had a couple of 1TB drives and a couple of smaller drives in my Windows PC. As I recall, I bought 2 x 1TB drives for the server. I copied the first drive from my Win machine to the server with the parity drive installed, which was slow. Once I had that copied, I pulled that drive from my Win box, installed it internally and precleared it (no longer necessary) on unRAID, then copied the 2nd 1TB drive from Win to the freshly installed drive on the server. I put up with the slow speeds because I didn't have enough hardware to do it any other way. It wasn't bad for 2TB, give or take.

 

In my description of "reusing hardware", I mentioned installing the drives internally in the server via a direct SATA connection. You can mount the drive in an external dock, but most docks these days are USB, and even if it's USB3, it'll still be slower than a direct SATA connection. Since my instructions assumed you'd be moving the data off the drive and then immediately reformatting it for use in the server for additional incoming data, there's no sense in mounting it in a dock, doing the slower USB transfer, then shutting down the server to mount it internally via SATA. You're going to have to shut down the server at some point, so you may as well do it up front and take advantage of the full SATA speed for the internal transfer.
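Rough copy-time arithmetic for one drive's worth of data (the link speeds below are assumptions, not benchmarks of any particular dock or controller):

def hours_to_copy(size_tb, effective_mbps):
    return size_tb * 1e6 / effective_mbps / 3600

drive_tb = 1
print(f"USB2 dock    (~35 MB/s): {hours_to_copy(drive_tb, 35):.1f} h")
print(f"USB3 dock   (~100 MB/s): {hours_to_copy(drive_tb, 100):.1f} h")
print(f"Direct SATA (~150 MB/s): {hours_to_copy(drive_tb, 150):.1f} h")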

 

But, it's up to you.

 

Yes, when 100TB drives are commonplace, there won't be much need for a multi-drive server for massive storage capacity; however, unRAID:

A) gives you parity protection against bit-rot (it is NOT a backup solution in and of itself).

B) is a great platform for Dockers and VMs, allowing you to use one otherwise lightly used CPU in place of many individual boxen.

C) is a great place for multi-petabyte storage, since 4K video will be yesterday's news and movie and TV rips will be multiple TB each for an "acceptable" level of quality, so a 100TB drive really won't be that much after all... (Seriously, a full BR disc is ±50GB now, and 4K is only getting bigger.) Much like projects will expand to take all the time allotted to them and then some, storage requirements will do so as well. (IMHO)

3 hours ago, FreeMan said:


Thank you for the info!
Quick question regarding cache drives for my future setup:

 

Is having 1TB of SSD cache in RAID enough, or would 2TB be better? If so, why is 2TB better, i.e. having 4TB of SSD in RAID for cache?

Thanks!

1 minute ago, CSIG1001 said:


#ItDepends

 

What are your needs?

 

Traditionally, you want/need cache drive(s) for increasing write throughput to the server. Writing directly to the parity-protected array requires extra reads and writes so parity can be recalculated to reflect the new content being written. The cache drive(s) allow for a non-parity-protected write to the server (usually onto very fast SSDs these days), and then Mover is scheduled to run nightly, when there's low system usage, to move data from cache to the parity-protected array. Since you can now create a cache pool, the cache itself can also be protected (RAID 1), so there's less chance of losing data written to cache.
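Conceptually, Mover is just a scheduled "relocate from cache to array" pass. A heavily simplified sketch (paths and logic are illustrative, not Unraid's actual mover script):

import shutil
from pathlib import Path

CACHE_ROOT = Path("/mnt/cache")   # cache pool mount (illustrative)
ARRAY_ROOT = Path("/mnt/disk1")   # one array disk (the real mover honours allocation rules)

def move_share_to_array(share: str) -> None:
    for src in sorted((CACHE_ROOT / share).rglob("*")):
        if src.is_file():
            dest = ARRAY_ROOT / share / src.relative_to(CACHE_ROOT / share)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))  # the real mover skips files that are still open

# Run on the nightly schedule, e.g.: move_share_to_array("Movies")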

 

More recently, Dockers are usually stored on the cache drive (the appdata share is the recommended and default location, and it's recommended that it be set to "cache preferred"). This gives the advantage of (usually, in the case of SSDs) faster access to the Docker file systems and, with cache pools, more data protection from mirrored writes as well.

 

How much is enough? It depends on how many dockers you're planning on running and how big the images are (sticking to a smaller set of docker authors usually reduces the footprint of each docker as they can share some of the base file system files), and it depends on what your daily write load is expected to be. If you're doing video production and you're writing large video files to your server every day, you might need 4+TB. If you're coming home from a 15-day vacation and you dump a few 128GB SD cards of photos to your server, you'll need 0.5TB max for that. If you're downloading BR-rips (disclaimer: not recommended or endorsed by Limetech, but let's face it, lots of dockers are dedicated to just that), figure on 50GB/movie times however many per day.
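Putting rough numbers on those examples (the figures are the ballpark ones from this paragraph, plus one assumed production load, not requirements):

vacation_dump_gb   = 3 * 128   # a few 128GB SD cards of photos, a one-off burst
br_rips_gb_per_day = 2 * 50    # two ~50GB BR-sized rips per day
video_prod_gb_day  = 500       # heavy daily video-production load (assumption)

print(f"Vacation photo dump : {vacation_dump_gb} GB (one-off)")
print(f"BR rips             : {br_rips_gb_per_day} GB/day")
print(f"Video production    : {video_prod_gb_day} GB/day")
# Size the cache for what lands on it between Mover runs (plus appdata/dockers),
# not for the whole library.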

 

Me, I've got a 240GB SSD cache drive and have run out of room a couple of times when I was new to Dockers and had the config wrong (writing logs to the cache drive will fill it up pretty quickly). I've managed to max it out a couple of other times, but it's rare. Setting your shares to Cache "Yes" will write to the cache until it fills up, after which your write speed will drop as it continues to write directly to the array. I've just purchased 2 new 240GB SSDs and will be adding them to the pool for a 3-drive BTRFS RAID 1 array that will give me about 360GB of protected space. I wanted it mainly for the protection; the space bump is a bonus, and I wasn't really thinking about the math on 3 drives... :/
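For reference, the "math on 3 drives": BTRFS RAID 1 keeps two copies of every block, so with equal-sized drives the usable space is roughly half the raw total, whatever the drive count. A minimal sketch:

def btrfs_raid1_usable_gb(*drive_sizes_gb):
    # Two copies of everything -> usable space is about half of the raw total
    # (exact for equal-sized drives; mixed sizes are more complicated).
    return sum(drive_sizes_gb) / 2

print(btrfs_raid1_usable_gb(240, 240, 240))   # -> 360.0, matching the ~360GB above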

6 hours ago, FreeMan said:


Wow, this is a really good write-up in answer to my question; I appreciate it, thank you. I checked my Plex folder and it is roughly 100GB just for metadata, so I think I am going to opt for 2TB SSDs (860 EVOs) in RAID 1, which should meet my requirements for future expansion and copying over large remux MKV files.

 

Thanks for your help, much appreciated.

11 hours ago, CSIG1001 said:

1. Do not add SSD cache until files are moved

You can install cache before the initial data load. The default for a user share is to not use cache. As long as you don't change that on any user share until after the transfer, cache won't be used.

 

You should install cache before you enable Docker and VMs, though, so those can be created on cache where they belong. If you wind up with them on the array, it can be a little extra trouble to get them onto cache, since Mover can't move open files.
