
Need some newbie help with Shares & Arrays


Nezra


So, where to start. I'm new to unRAID, but I chose it for its functionality for what I wanted.

 

Here are a few basic things.

 

1. The Array is set up as follows:

1x4TB Parity

1x4TB Data

3x1TB Data

1x1TB Data unassigned

 

2. Share:

 

Name: Media

Allocation Method: High Water

Minimum Free Space: 25000000

Split level: Automatically split any directory as required.

Included disks: All

Excluded disks: None

 

3. Global share settings:

 

Enable disk shares: Auto

Enable user Shares: Yes

Included disks: All

Excluded disks: None

 

4. Running Unraid Server Basic 6.1.8.

 

5. No file is larger than 10 GB.

 

6. Installed 'things': Plugins: unassigned devices; Dockers: PlexMediaServer.

 

Here's where I'm confused. I've read the documentation in the wiki, watched the videos, pressed the help icon in the GUI to make sure I set it up correctly (since the wiki tends to be off with different version info), and perused the forums, all to make sure I wasn't missing anything before I jumped on the unRAID bandwagon.

 

I have 2 issues currently.

 

1. Despite the share settings, while moving files into the array, regardless of the directory levels, everything is being funneled onto the single 4TB disk. The other disks are getting nothing, and all say 0% utilized on the main page and when computing shares. It's now up to 3.5TB, and I still have 1TB or so left to copy. My understanding of high water is that, at least by this point, it would have started spreading the data across disks, but it hasn't, and it just continues to fill up the 4TB disk.

    A. Am I doing something wrong?

    B. Am I misunderstanding "high water" for allocation?

    C. Is there anything I can do to fix it?

 

2. The second issue surrounds starting the array. In addition to the permanently installed drives listed above, I have an external hard drive I am trying to move content from into the array. Upon rebooting the system, the array won't start, citing an invalid configuration and suggesting I consider upgrading my key, yada yada yada. I only have 5 disks assigned in the pool as described above; 1 is unassigned, and the USB external drive is not mounted or shared. This "invalid configuration" persists until I remove the external drive and refresh the page. Then I can start the array and reattach the USB hard drive to the computer.

    A. Is there any way to prevent this from happening all the time when I reboot?

    B. Is the "device limit" based on what's attached rather than what's assigned to the array, meaning I'm misunderstanding how the "device limit" relates to my key?

 

 

I'd appreciate any insight into these issues. I'm sure it's either a configuration issue, a step I missed somewhere, or a misinterpretation of the available documentation.

Link to comment

Thanks. I was under the impression hot spares could be used that way so long as the devices themselves were assigned to an array.

 

Regarding the high water shares, at what point can I expect the data to be spread to the other drives?

Link to comment

nvm, figured it out.

 

It appears that while the manual and the website say you cannot ASSIGN more devices than the key allows, on a hardware level you cannot HAVE more than that number INSTALLED in the system, regardless of how many are assigned, when you start the array. Any number can apparently be installed and passed through to VMs and unassigned devices after the fact.

 

The documentation in the getting started guide is simply misleading.

 

IMPORTANT:  Your array will not start if you assign more devices than your license key allows.

 

 

Link to comment

Regarding the high water shares, at what point can I expect the data to be spread to the other drives?

When the 4TB has passed its current high water mark (half of its free space), the next drive will be used until it passes half of its free space, and so on. Then, when the last drive the share includes has passed its high water mark, it will go back to the first with a new high water mark of half of its remaining free space, and so on.
Link to comment

Regarding the high water shares, at what point can I expect the data to be spread to the other drives?

When the 4TB has passed its current high water mark (half of its free space), the next drive will be used until it passes half of its free space, and so on. Then, when the last drive the share includes has passed its high water mark, it will go back to the first with a new high water mark of half of its remaining free space, and so on.

 

Now, if I'm not mistaken, since the setup here is 4TB, 1TB, 1TB, 1TB, it'll keep using the 4TB disk after the first high-water mark... since half of 2TB is equal to or greater than 1TB.

 

Edit for a more complete explanation:

 

Your Array:

Disk1: 4TB

Disk2: 1TB

Disk3: 1TB

Disk4: 1TB

 

High-Water doesn't work the way people seem to assume it does (IMO this means the help / documentation / naming needs to be looked at so that people can understand it better...)

 

The first high-water mark is 1/2 the size of the largest disk. Since the largest array disk is the 4TB disk, the first high-water mark is 2TB. When writing to the high-water share, unRAID will pick the disk with the least free space greater than the high-water mark, in this case 2TB. (This of course assumes no other conflicting logic with split levels or excludes.)

 

The only disk that qualifies is Disk1, since Disk2-4 don't have more than 2TB of free space.

 

Once you exceed 2TB on that disk, the high-water mark will recalculate by dividing the old high-water mark by 2. In this case the new high-water mark will be 1TB.

 

Again, the only disk that qualifies is Disk1, since Disk2-4 don't have more than 1TB of free space.

 

Once that new high-water mark is met, the high-water mark will recalculate using the same approach. In this case the new high-water mark will be 500GB.

 

At this point Disk1 has 3TB of data and 1TB free.

Disk2-4 have 1TB free. They all have an equal amount of space above the high-water mark, so it'll pick a disk (likely Disk1; I'm not sure exactly how it picks when they are equal) and fill that first. It'll continue to fill that disk until it no longer has the least free space above the high-water mark (so after writing 500GB), then it'll move to the next disk and write 500GB to that one, and so on...
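
To make that walkthrough concrete, here's a rough Python sketch of the halving high-water behaviour described above, using the disk sizes in this thread. It's only an illustration of the logic as explained here, not unRAID's actual code; the GB units, the 100GB write chunk, and the tie-breaking rule (lowest disk number wins a tie) are my assumptions.

# Rough simulation of the high-water allocation logic described above.
# Sizes in GB; this is a guess at the behaviour, not unRAID source code.
TB = 1000

def pick_disk(free, mark):
    # Pick the disk with the least free space that is still above the
    # current high-water mark; ties go to the lowest disk number (assumed).
    candidates = [(space, i) for i, space in enumerate(free) if space > mark]
    return min(candidates)[1] if candidates else None

def simulate(sizes, total_to_write, chunk=100):
    free = list(sizes)
    mark = max(sizes) / 2            # first mark: half the largest disk
    written, last = 0, None
    while written < total_to_write:
        target = pick_disk(free, mark)
        if target is None:
            mark /= 2                # nothing above the mark: halve it
            continue
        if target != last:
            print(f"after {written}GB written: switching to disk{target + 1} "
                  f"(mark {mark:.0f}GB, free {free})")
            last = target
        free[target] -= chunk
        written += chunk

# The data disks in this thread: one 4TB and three 1TB.
simulate([4 * TB, 1 * TB, 1 * TB, 1 * TB], total_to_write=4 * TB)

Running that only shows the target moving off the 4TB disk after roughly 3.5TB has been written to it, which is exactly the behaviour walked through above.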

Link to comment


 

And it turns out this is exactly what happened. I incorrectly assumed after reading the documentation that it would have been the reverse. I proceeded to start copying files to the array one at a time. Once it hit 498GB free (3.5TB used) on the 4TB drive, it proceeded to start filling the others. I incorrectly assumed that because the wording mentioned half of its free space, it would start splitting, when in reality, like you said, it's 1/2 of the free space of the SMALLEST drive. In my case it turned out to be 500GB.

 

All fixed and understood now, working as intended.

Link to comment

Perhaps a better visualization would be to have rectangles of differing length, representing drive sizes, side by side, all aligned at the top. As the data fills the drives, it fills the deepest rectangle (largest drive) until it gets to the halfway mark of the smallest drive, then it starts filling the next deepest rectangle until, once again, it reaches the halfway mark. When all the drives are filled to that mark, a new high water line is set at half the remaining space, and it fills each drive to the new mark. Lather, rinse, repeat.

 

High water is a logical combination of fill-up and most-free. It's the allocation method least likely to cause distress if you set it and forget it, as all drives will eventually fill to an even amount of free space left, and it still doesn't spread data willy-nilly amongst all the drives, so related things have a better chance of staying on the same drive without manual intervention.
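
To show what "a logical combination of fill-up and most-free" can look like, here's a loose Python sketch comparing how the three allocation methods might pick a target disk. The picker rules are simplified guesses based on this thread and the general descriptions of the methods, not unRAID's actual selection code; the min_free value of 25GB is just a stand-in roughly matching the 25000000 Minimum Free Space set on the share earlier, assuming that setting is in KB.

# Loose sketch of how each allocation method might choose a disk (sizes in GB).
# Simplified guesses only, not unRAID's real implementation.

def pick_fill_up(free, min_free):
    # Fill-up: use the lowest-numbered disk that still has room.
    for i, space in enumerate(free):
        if space > min_free:
            return i
    return None

def pick_most_free(free, min_free):
    # Most-free: always use the disk with the most free space.
    best = max(range(len(free)), key=lambda i: free[i])
    return best if free[best] > min_free else None

def pick_high_water(free, mark, min_free):
    # High-water: only disks above the current mark qualify; among those,
    # take the one with the least free space, halving the mark when none qualify.
    while mark > min_free:
        candidates = [(space, i) for i, space in enumerate(free) if space > mark]
        if candidates:
            return min(candidates)[1]
        mark /= 2
    return None

free = [500, 1000, 1000, 1000]          # this array after roughly 3.5TB written
print(pick_fill_up(free, 25))           # 0: keeps writing to the 4TB disk
print(pick_most_free(free, 25))         # 1: jumps to whichever disk has the most space
print(pick_high_water(free, 2000, 25))  # 1: only disks above the current mark qualify

The sample free list corresponds to this array at roughly the point the writes finally moved off the 4TB disk.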

Link to comment

High water is a logical combination of fill-up and most-free. It's the allocation method least likely to cause distress if you set it and forget it, as all drives will eventually fill to an even amount of free space left, and it still doesn't spread data willy-nilly amongst all the drives, so related things have a better chance of staying on the same drive without manual intervention.

 

This is a good description. The only catch to this, as the OP found out, is that if you have an array of mixed-size disks where one is significantly larger than the others, it will receive the vast majority of the writes at first.

Link to comment
This is a good description. The only catch to this, as the OP found out, is that if you have an array of mixed-size disks where one is significantly larger than the others, it will receive the vast majority of the writes at first.
For most real-world usage, that's not a catch; it's a benefit. It means more drives will stay spun down, and related content is guaranteed to stay on the same disk.

 

Unraid doesn't get a performance boost by spreading data, since it doesn't stripe the data across multiple spindles; it's actually worse to spread the data because you may have to wait on a spin-up event.

 

Unraid is SO different from traditional RAID that it takes a mindset change to use it to its best effect.

Link to comment

