Only using 1 disk


thany
Solved by Kilrah


I have an array of 5x 8TB disks, one of which is parity. I have a Docker container writing a big downloaded file to the array, and a client (me) downloading it to a PC.

 

It seems to only ever use 1 of the 5 disks for any read or write. How come?

 

There are 3 big disadvantages to this:

* Write performance could be better if all 4 data disks were used.

* Read performance could be better especially while writing another file.

* Wear levelling is, well, unlevelled.

 

How can I make sure it uses all 4 writable disks?

 

To make sure I've got my bases covered:

* The shares it is using include all disks, and exclude none.

* The shares are in "high-water" mode. I'm slightly mystified by this: there's no regular striping mode, only modes that I shouldn't have to care about.

* The writable disks are formatted as XFS, which is the default (meaning good, hopefully).

* I also have a cache SSD which is not being used, and I don't understand why. Surely files being read are also saved there, in case they are requested a second time? Is that not how it works? If it did work that way, it would at least help keep up the performance of subsequent reads.

* Written files could easily fit into system memory, which I'm not sure is being used as a cache either. If it were, the disks wouldn't be grinding so much, I reckon.

* unRAID OS is up-to-date at 6.11.1.

Edited by thany

Are you sure you understand how High Water allocation works? It is described here in the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
 

I suspect that what you are seeing is expected behaviour, but you are likely to get better-informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.


Then what do I do to make it at least use the disks evenly?

How do I make sure it takes advantage of the presence of multiple disks from a performance standpoint?

 

As for which setting to use: "high water" is the default, and I'm always hopeful for sensible defaults. Surely, there is just one setting that is best for everyone, right? In a real RAID5 config, I also cannot (and should not!) be telling it how to do its job. Not in a way that significantly changes its behaviour anyway.

 

It feels to me that, with this configuration, (2+1)x10TB (=20TB usable) provides exactly the same performance as (5+1)x4TB (=20TB usable), because only one disk is ever being used to store any file big enough to measure performance in a meaningful way. Or, am I overlooking something?


Also, the docs seem to contradict themselves:

 

Quote

to step fill each disk so at the end of each step there is an equal free space left on each disk

 

And:

 

Quote

Most times, only a single disk will be needed when writing a series of files

 

I'm not sure what a "step" is. Never heard of this term in the context of storage servers...

So which is it? Will it fill the disks one by one, or does it wear them evenly? It seems to suggest both, but that's impossible. It feels like it's doing an arbitrary balance between the other two methods, but it's still rather vague. And I wonder what the purpose is of having this (confusing, it turns out) option as the default method.
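
As far as I can piece together, a "step" is one pass where each disk in turn gets filled down to the current high-water mark, and once no disk has free space above the mark anymore, the mark halves and the next step begins. A rough sketch of my reading of the docs (not Unraid's actual code; the sizes are made up):

```python
# Rough sketch of my reading of high-water allocation. Not Unraid's code.

def pick_disk_high_water(free_space, mark):
    """Return (disk_index, mark) for the next write, or (None, mark) if full."""
    while mark >= 1:
        # Lowest-numbered disk whose free space is still above the mark
        for i, free in enumerate(free_space):
            if free > mark:
                return i, mark
        # No disk qualifies: halve the mark - this starts the next "step"
        mark //= 2
    return None, mark

free_space = [8000, 8000, 8000, 8000]   # 4 data disks of 8 TB, in GB, all empty
mark = max(free_space) // 2             # initial mark: half the largest disk

for _ in range(60):                     # write a series of 500 GB files
    disk, mark = pick_disk_high_water(free_space, mark)
    if disk is None:
        break
    free_space[disk] -= 500

print(free_space)  # disks get written one at a time, yet end each step equally full
```

If that's roughly right, it would explain both quotes: within a step only a single disk is being written to, but at the end of every step all disks have the same free space left.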

Edited by thany

Sure, I get that. It's not RAID, but that wasn't the problem at hand. The problem is that it's only using one disk, while a sensible default method would be to use all disks.

 

I just don't understand the exact purpose of this "high water" method. Actually I don't understand why anyone would have to select any option, but I guess that's a pet peeve that comes with the whole idea of not having an actual RAID.

 

And on a side note: I can't create shares on a pool array, can I? I haven't found how to "point" a share at anywhere but the main array...


How is most-free worst for performance? To me, high water is already terrible for performance. While a file is being written to the array, reading another grinds it down to maybe 8MB/s if I'm lucky. Normally it can do 110MB/s, and that's without it even being connected to my 10Gb switch yet.

 

I would think most-free is best for performance, because given evenly filled disks, it will use all disks more-or-less simultaneously when writing many smallish files. That's great for performance! A single large file will be written to a single disk, apparently, which should perform identically in every method.
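
To illustrate what I mean, here's a quick sketch of my understanding of most-free (not Unraid's actual logic; the numbers are made up):

```python
# Quick sketch of my understanding of "most-free": every new file goes to
# whichever data disk currently has the most free space. Not Unraid's code.

free_space = [2000, 2000, 2000, 2000]   # GB free on 4 evenly-filled data disks
placements = []

for _ in range(12):                     # write 12 smallish files of 10 GB each
    disk = free_space.index(max(free_space))
    free_space[disk] -= 10
    placements.append(disk)

print(placements)  # [0, 1, 2, 3, 0, 1, 2, 3, ...] - the files land on all disks
```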

 

It also appears to mean that reading and writing any file to/from any share will never outperform the capabilities of any one disk. In other words, with 5 disks that can do 200MB/s, I will *never* see files being read/written faster than 200MB/s, no matter how fast my network is. Correct?

Edited by thany (clarify)

Anyway, back to the issue at hand. How can I improve performance with the (apparently) best-performing method (high water), when it grinds to a near halt while reading and writing at the same time?

 

Cache would help, to a point, but so would levelling writes.

Or using 40TB of SSD I guess, but unfortunately my budget isn't adequate for that :)

Edited by thany
  • Solution
24 minutes ago, thany said:

How is most-free worst for performance?

 

It will be worst for write performance. It would be better for the "reading a file from a disk while writing to another" case you mention.

On Unraid, writes to the array are slow, hence the typical setup of a cache SSD that new files get written to, which you then have moved to the array on a schedule in the middle of the night, or at another appropriate time when you're unlikely to be accessing the array.
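
Conceptually the mover just walks the cache pool on that schedule and transfers anything belonging to a cached share over to the array, something like this (a simplified sketch, not Unraid's real mover script; the mount points are only illustrative):

```python
# Simplified sketch of what a mover conceptually does: walk the cache pool and
# move files for cached shares onto the array. NOT Unraid's actual mover;
# the mount points below are illustrative assumptions.
import shutil
from pathlib import Path

CACHE_ROOT = Path("/mnt/cache")   # hypothetical cache pool mount
ARRAY_ROOT = Path("/mnt/disk1")   # hypothetical target data disk

def move_share_to_array(share: str) -> None:
    src_share = CACHE_ROOT / share
    for src in src_share.rglob("*"):
        if not src.is_file():
            continue
        dst = ARRAY_ROOT / share / src.relative_to(src_share)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))  # copy to the array, drop from the cache

# Run from a scheduler (e.g. cron at 3 AM) so it doesn't compete with daytime use
move_share_to_array("downloads")
```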

 

  

24 minutes ago, thany said:

It also appears to mean that reading and writing any file to/from any share will never outperform the capabilities of any one disk. In other words, with 5 disks that can do 200MB/s, I will *never* see files being read/written faster than 200MB/s, no matter how fast my network is. Correct?

It depends on which write mode you're in: if you're in "reconstruct write" mode, writes to the array happen at the speed of the slowest drive in the whole array; in read-modify-write mode it'll be less than half the speed of the slower of the drive being written to and the parity drive.
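
Roughly speaking, that's because in read-modify-write the drive being written to and the parity drive each have to read the old block before writing the new one, while reconstruct write reads the other data drives and writes data plus parity in a single pass. A back-of-the-envelope model (my simplification; it ignores seeks, caching and so on):

```python
# Back-of-the-envelope estimate of array write speed in the two modes.
# A rough model only: ignores seek latency, controller caching, etc.

def read_modify_write_speed(target_mb_s, parity_mb_s):
    # Each block is read and then rewritten on both the target and parity
    # drives, so throughput stays below half the slower of the two.
    return min(target_mb_s, parity_mb_s) / 2

def reconstruct_write_speed(all_drive_speeds_mb_s):
    # All data drives are read and data + parity written in one pass,
    # so the slowest drive in the array sets the pace.
    return min(all_drive_speeds_mb_s)

drives = [200, 200, 200, 200, 200]        # 4 data + 1 parity, 200 MB/s each
print(read_modify_write_speed(200, 200))  # ~100 MB/s at the very best
print(reconstruct_write_speed(drives))    # ~200 MB/s
```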

 

Edited by Kilrah
  • 2 weeks later...

Thanks for explaining. Cache SSD makes more sense now. Still feels a little bit like a workaround for a problem that we created, but then again, isn't everything :)

 

It does then also make sense to me to install two cache SSDs and have them be redundant. I'm sure that's possible. But it's not really a cache, is it, if files can at some point *only* be on the cache. It's more like a pre-positional storage location.

36 minutes ago, thany said:

Thanks for explaining. Cache SSD makes more sense now. Still feels a little bit like a workaround for a problem that we created, but then again, isn't everything :)

 

It does then also make sense to me to install two cache SSDs and have them be redundant. I'm sure that's possible. But it's not really a cache, is it, if files can at some point *only* be on the cache. It's more like a pre-positional storage location.

In the latest Unraid releases we tend to talk about Pools (of which you can have multiple), with one of their capabilities being to act as a cache for a share that ends up on the array (Use Cache=Yes). This makes sense when you consider that you can also use a pool as an application drive for Docker containers or VMs, or to host a share completely on a pool (the Use Cache=Only case).
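
As a rough summary of what the Use Cache settings do for a share (my paraphrase of the documented behaviour; check the online manual for the exact wording):

```python
# Rough summary of where new files land per "Use Cache" setting and what the
# scheduled mover then does. My paraphrase of the documented behaviour.
use_cache_modes = {
    "No":     {"new files land on": "array", "mover": "does nothing"},
    "Yes":    {"new files land on": "pool",  "mover": "moves pool -> array"},
    "Prefer": {"new files land on": "pool",  "mover": "moves array -> pool"},
    "Only":   {"new files land on": "pool",  "mover": "does nothing"},
}

for mode, behaviour in use_cache_modes.items():
    print(f"Use Cache={mode}: {behaviour}")
```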


Well the "problem" is a compromise needed to give Unraid its flexibility... if that's not what you want there are plenty of other OSes that use standard RAID arrays to choose from, or you can even make a ZFS pool on Unraid if you want...

 

Yes you can have redundant cache drives.

Edited by Kilrah

To my knowledge (which isn't saying much, to be fair) there's only one other OS that seems to be reasonably well known with a reasonably good community behind it, and that's TrueNAS. Except that one has a serious data loss problem, so to me it's poop. That leaves unRAID (for me, again) for its concise and well-put-together GUI, which makes it easy for a novice to use but is still as good as any Linux.

 

Anyway, I'm not 100% sure it's possible to create shares on a ZFS pool (on the SHARES tab that is - of course anything's possible in the terminal). So that's maybe not a super great option to explore for a primary storage array, unless I'm mistaken, which could well be.

