Storing large files



Hi! I'm new here 🙂

 

I understand how I'm supposed to handle large files (i.e. set the minimum free space larger than the largest file). However, my file sizes vary from bytes to (possibly) terabytes, and some are similar in size to, or even larger than, some disks in the array. Setting the minimum free space that high would not only waste space, but entire disks.

 

Is there a way to use the largest disk as cache and have Unraid move the files from there to the disk with the lowest amount of free space on which they will fit? Storing a file on the cache disk first makes its size known to Unraid, and the cache size in turn is a known limitation to me. This would allow me to completely fill even small disks, while retaining the largest possible chunks of free space on the larger disks for files that actually are that big.
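
Purely to illustrate the rule I have in mind, here is a rough Python sketch (hypothetical: the disk names and free-space numbers are made up, and I don't know of any Unraid setting that actually works this way):

```python
# Hypothetical "best fit" placement: once a file is on the cache its size is
# known, so pick the data disk with the LEAST free space that still fits it.

def pick_target_disk(file_size, free_space):
    """free_space: dict of disk name -> free bytes (made-up numbers below)."""
    candidates = {name: free for name, free in free_space.items() if free >= file_size}
    if not candidates:
        return None  # nothing fits; the file would have to stay on the cache
    # smallest remaining free space that still fits, so small disks fill up first
    return min(candidates, key=candidates.get)

free_space = {               # illustration only
    "disk1": 500 * 10**9,    # 500 GB free
    "disk2": 2 * 10**12,     # 2 TB free
    "disk3": 8 * 10**12,     # 8 TB free
}

print(pick_target_disk(300 * 10**9, free_space))  # -> disk1
print(pick_target_disk(4 * 10**12, free_space))   # -> disk3
```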

19 minutes ago, ventrue said:

Is there a way to use the largest disk as cache and have Unraid move the files from there to the disk with the lowest amount of free space on which they will fit?

There is no way to do what you want, and even if you could, it would not help: the mover does not take the size of the file it is about to move into account when selecting a target drive.

 

For very large files it is easiest to move them directly to the desired drive, bypassing the cache completely. You would need to have Disk Shares enabled to do this.
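
As a minimal sketch of what that manual move could look like when run on the server itself (every path below is just a placeholder):

```python
# Minimal sketch: move a very large file straight onto one data disk,
# bypassing the cache entirely. All paths below are placeholders.
import shutil

src = "/path/to/huge-project.img"          # wherever the file currently lives
dst = "/mnt/disk3/media/huge-project.img"  # write directly into disk3's folder for the share

shutil.move(src, dst)  # the file lands on disk3 only; the mover never sees it
```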


If your array has a mix of very small disks and very large ones, you should consider how disks are allocated to shares. The default is for all disks to be available to all shares, but you might want to change this to reduce the risk of running out of space when writing one of the very large files. The order of the disks in the array and the choice of allocation method might be significant, too.

Another thing you might want to consider is moving some of the smaller disks out of the main array and making them into a pool. Using btrfs RAID you can combine their capacities and (unlike the main array) store files that are bigger than the capacity of a single disk. With more information about your use case it might be possible to make more suggestions.
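
To make the allocation methods concrete, here is a simplified Python sketch of how the three methods (Most-free, Fill-up, High-water) choose a disk. It is only an approximation: the real logic also honours split level and minimum free space, and all the numbers are invented:

```python
# Simplified sketch of Unraid's three allocation methods (illustration only;
# the real logic also honours split level and minimum free space settings).

def most_free(free):
    # always write to the disk with the most free space
    return max(free, key=free.get)

def fill_up(free, order, file_size):
    # fill disks in their assigned order, moving on only when one can't fit the file
    for name in order:
        if free[name] >= file_size:
            return name
    return None

def high_water(free, sizes):
    # fill disks down to successive "water marks": half the largest disk,
    # then a quarter, and so on
    mark = max(sizes.values()) // 2
    while mark > 0:
        for name in sizes:
            if free[name] > mark:
                return name
        mark //= 2
    return most_free(free)

sizes = {"disk1": 1 * 10**12, "disk2": 4 * 10**12}    # made-up capacities
free  = {"disk1": 800 * 10**9, "disk2": 3 * 10**12}   # made-up free space

print(most_free(free))                                 # -> disk2
print(fill_up(free, ["disk1", "disk2"], 100 * 10**9))  # -> disk1
print(high_water(free, sizes))                         # -> disk2 (only one above the 2 TB mark)
```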


Well, believe it or not, the only significant reason for me to use a NAS is that everything else in my PC is silent and I want to throw the single noisy HDD that I still need for those large files out of the room. I try to move the files I'm working on onto SSDs for more speed anyway, but if they're too large that's not possible, and then the HDD has to keep running. That annoys me.

Everything else is just an afterthought (e.g. moving personal files to the NAS, using parity, abusing parity to give drives with bad sectors another chance, etc.). Unfortunately, if I want to use parity, I will have to use that one large drive as the parity drive, which will waste 4TB of storage and require me to throw literally everything else I have lying around into the array in order not to lose too much space. I should probably just buy some larger drives, but I'm not willing to pay the inflated prices we're seeing right now. Luckily Chia isn't farmed on actual soil, or we'd all be starving in a year!

 

The easiest way to go right now would probably be to just use the one large HDD I've got as the only disk, if Unraid can't do any organising in the background. I don't really want to deal with that myself too much.

 

But are you sure the mover doesn't care about file sizes? I mean, once something's in the cache, the file size should be known, and surely the mover considers that? If so, I could order the HDDs from small to large and have Unraid fill them up in that order. That would pretty much do what I want then, wouldn't it?

20 minutes ago, ventrue said:

abusing parity to give drives with bad sectors another chance

This is the wrong way to think about parity. Any bad drive in your array puts data at risk. Parity by itself cannot rebuild anything. Parity PLUS ALL other disks is required to rebuild a failed disk.
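
As a toy worked example of why that is, assuming simple XOR parity like Unraid's single-parity scheme (single made-up bytes here; a real array does this bit-for-bit across entire disks):

```python
# Toy illustration of single (XOR) parity across three data disks.
d1, d2, d3 = 0b10110010, 0b01101100, 0b11110000   # made-up data bytes
parity = d1 ^ d2 ^ d3

# Rebuilding a failed disk needs the parity AND every surviving data disk:
rebuilt_d2 = parity ^ d1 ^ d3
assert rebuilt_d2 == d2

# Parity on its own says nothing about d2; lose a second disk (say d3 as well)
# and d2 can no longer be recovered from parity and d1 alone.
```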

2 hours ago, trurl said:

Parity PLUS ALL other disks is required to rebuild a failed disk.

 

Right now, there's no redundancy at all. Building that up and then needing it a little bit more than before is a net gain, I'd say. I do have real-time backups as well, should the worst happen. I don't trust storage devices.

