Posts posted by gregnostic

  1. 23 hours ago, John_M said:

    50 GB free on an XFS-formatted disk that's essentially used in read-only mode is plenty. I fill up disks on the basis that once full the contents will seldom if ever change and manually add files until they have as little as 10 GB free and it's fine. There was an update to XFS some years ago that used up some of the free space, but only a small percentage of the 10 GB.

    Ultimately, it's true that I could change how I work with files to work around this problem. I could create two media shares--one for files that change and one for files that don't--and assign those shares to different disks to maximize usage on the disks with purely static data. But if I did that, I'd have to stop organizing files based on what they are and start organizing them based on how they behave. And frankly, that's not appealing to me.

     

    I like being able to have a media share where I can put my media files, which sometimes change (e.g. recorded TV shows that get deleted after being watched). I like not having to worry about whether I plan to keep something forever or just for a while.

     

    If Unraid allowed me to set a minimum free space based on a percentage rather than a fixed byte value, I'd be able to retain all that flexibility while giving up a minimal amount of storage space. To my mind, losing just 5% of my storage space is a fair trade for that. It might not be for everyone, and that's fine. But I'd really like to have the option.

  2. I actually just finished adding a response to that very thread.

     

    If you have a 16 TB hard drive and want to keep a comfortable 5% free, you're looking at setting a fixed value of 800 GB. But if your cache disk is only 512 GB, writes will never touch the cache and will always go directly to the 16 TB drive, because the cache can never have 800 GB free. Applying a fixed byte value to a setup with variable disk sizes simply doesn't work (a quick sketch below illustrates why).
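
    A minimal Python sketch of the problem. The allows_write name and the simple free-space check are my own illustration of the general idea, not Unraid's actual allocator:

        # Why a fixed byte threshold sized for a big disk locks out a small cache.
        GB = 1000**3
        TB = 1000**4

        MIN_FREE = 800 * GB  # 5% of a 16 TB disk, entered as a fixed byte value

        def allows_write(free_bytes: int, min_free: int = MIN_FREE) -> bool:
            # Hypothetical check: write only if the disk stays above the floor.
            return free_bytes >= min_free

        print(allows_write(512 * GB))  # False: even a completely empty 512 GB
                                       # cache can never clear an 800 GB floor
        print(allows_write(16 * TB))   # True: an empty 16 TB array disk passes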

  3. Huge +1 on this. The cache already has its own global "Minimum Free Space" option under the "Cache Settings" header. Unraid should not ignore that in favor of a share's Minimum Free Space setting, which should be a value tuned for the array disks. Overriding the global cache value with something share-specific leads to unexpected, negative behavior.

     

    Separate from that issue, I'm also going to echo what ados said above: when you have an array with multiple disk sizes, a static number of bytes to keep free doesn't make sense. Reserving enough free space for the largest drive leaves only a small percentage of a small drive usable.

     

    Instead, I propose we be allowed to set a percentage in the "Minimum Free Space" fields. A value of, say, 10% would reserve 1 TB on a 10 TB drive but only 500 GB on a 5 TB drive. With a fixed byte value, you either lose usable space on smaller drives or risk filling a disk so full that fragmentation slows it down. Neither is a good option (see the sketch at the end of this post).

     

    (I also realize the field's help text--"Choose a value which is equal or greater than the biggest single file size you intend to copy to the share."--is meant to prevent fragmentation, but on a share where files are deleted and rewritten frequently, fragmentation can happen even if every file is under that threshold.)
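
    Here's a short sketch of the proposed percentage option. min_free_for is a hypothetical helper of my own; no such setting exists in Unraid today:

        # Proposed behavior: reserve a percentage of each disk rather than a
        # fixed byte count (illustration only, not an existing Unraid option).
        TB = 1000**4
        GB = 1000**3

        def min_free_for(disk_size_bytes: int, percent: float = 10.0) -> int:
            """Bytes to keep free on a disk of the given size."""
            return int(disk_size_bytes * percent / 100)

        print(min_free_for(10 * TB) / GB)  # 1000.0 GB reserved on a 10 TB drive
        print(min_free_for(5 * TB) / GB)   # 500.0 GB reserved on a 5 TB drive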

  4. 14 hours ago, itimpi said:

    You might want to check what value you have set for Minimum Free Space for the shares in question? It is not obvious, but the larger of the global share setting and the individual share setting is used to decide if the file should go to the cache.

    Thanks for the answer. I'm going to have to think about how I want to approach this problem, because I don't want to fill my disks to the brim. If I do what that field's help text says and use the size of the largest files I deal with (about 50 GB), that's 99.5% usage on a 10 TB disk, which is generally a bad thing to do to a hard drive. (That's why I set the value as high as I did.) The "larger of the two settings" rule you describe is sketched below.
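
    A minimal Python sketch of that rule as I understand it. The function names are mine, not Unraid internals:

        # The cache decision uses the larger of the global share setting and
        # the individual share setting (values in KB, per the field's units).
        def effective_min_free_kb(global_kb: int, share_kb: int) -> int:
            return max(global_kb, share_kb)

        def goes_to_cache(cache_free_kb: int, global_kb: int, share_kb: int) -> bool:
            return cache_free_kb >= effective_min_free_kb(global_kb, share_kb)

        # A 1 TB share setting (10**9 KB) dwarfs a small global setting, so a
        # cache with just under 1 TB free already fails the check:
        print(goes_to_cache(1_100_000_000, 10_000_000, 1_000_000_000))  # True
        print(goes_to_cache(990_000_000, 10_000_000, 1_000_000_000))    # False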

     

    I had actually briefly considered this as a possibility, but I dismissed it because the help text on the Minimum Free Space field has this to say:

    Quote

    The minimum free space available to allow writing to any disk belonging to the share.

    To my mind, that doesn't include the cache. It makes no sense to me whatsoever to include the cache in this value.

     

    14 hours ago, itimpi said:

    It is also a good idea to use the suffixes rather than entering just a numeric value (especially as that value is not an absolute number but the number of KB) as it can be easy to get the number of zeroes wrong.

    I have "1 TB" as the input. It displays in the UI as "1 TB" but it apparently stores the value as a plain number.

     

    14 hours ago, itimpi said:

    There has been a feature request raised that the User Share value for Minimum Free space should not apply to the cache, and that the cache should only use the value set under Settings >> Global Share Settings, but I have no idea if that is going to happen.

    I sure hope this happens. This behavior makes next to no sense. Worse, the help text is simply wrong.

  5. I'm in the process of ripping a bunch of my Blu-rays, and I've run into an issue where Unraid bypasses my cache and writes files directly to the array once the cache is about half full. As long as the cache stays at that level, ALL new file transfers bypass the cache and are written directly to the array.

     

    My cache drives are WD Blue 2 TB SSDs, and I've been using them for over a year with no issues. I've been able to write nearly the full 2 TB to the cache drives in the past (the last time I recall doing this was likely on 6.7.x).

     

    I'm including a screenshot of the most recent example of this happening. When I took it, I was trying to transfer over 1 TB of Blu-ray rips to the server. The first half of the transfer went along at a reasonable clip--about 105 MB/s consistently for about two hours. Then, all of a sudden, the transfer fell off to anywhere from 20 MB/s to 55 MB/s. I looked in Unraid and saw that the cache drive was no longer receiving writes (at least not from this transfer); they were going to an array disk instead.

     

    Note in the screenshot that I've included a LibreSpeed speed test result (LibreSpeed is hosted on Unraid in Docker). The test was run while the file transfer was still going in the background, so it shows the spare throughput between client and server. This is not a networking issue.

     

    I've also seen this issue in recent days when ripping discs directly to Unraid (my usual workflow). Once roughly 960 GB of cache space is used, newly ripped files start getting written to an array disk. Because the array disk can't keep up with the data from three simultaneous rips, I started ripping to my local machine instead, hence this batch transfer. (The rough arithmetic below shows how the ~960 GB cutoff lines up with the settings discussed above.)
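
    A back-of-the-envelope check in Python. This assumes, per the replies earlier on this page, that the share's 1 TB Minimum Free Space value is being applied to the roughly 2 TB cache pool:

        # Rough arithmetic only; the numbers are from my setup.
        TB = 1000**4
        GB = 1000**3

        cache_size = 2 * TB       # usable space on the cache pool
        share_min_free = 1 * TB   # the share-level Minimum Free Space value

        # Writes bypass the cache once free space drops below the threshold:
        used_at_bypass = cache_size - share_min_free
        print(used_at_bypass / GB)  # 1000.0 GB -- close to the ~960 GB where
                                    # the bypass kicks in, allowing for overhead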

     

    All these transfers are happening over SMB.

     

    Server: Unraid 6.8.3, Xeon E5-2630 v4 (10-core, 2.2 GHz), SuperMicro X10DRi w/ Intel i350 LAN, 32 GB RAM, etc.

    Client: Windows 10 v1910, Ryzen 3800X, 16 GB RAM, 2x 4 TB HGST NAS drives in RAID 1

     

    I'm happy to provide any other relevant information if it'll help figure out what the heck is going on here.

    [Attached screenshot: unraid-cache-issue.png]
