6.8.3 Disk writes causing high CPU



You have to set your system share and your appdata share to "Prefer" (the Use cache setting).

 

Then switch off Docker.

Run the mover. 

 

Once it is finished, verify it as follows:

Check that the image exists on the cache:

ls -al /mnt/cache/system/docker/docker.img

and that it does not exist at all on any of your regular array disks,

e.g. ls -al /mnt/disk1/system/docker/docker.img should report "No such file or directory".
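A quick way to check all array disks at once, assuming the default share names "system" and "appdata" (adjust the paths if yours differ):

# No leftover copies of the system or appdata shares should remain on any
# individual array disk, so this loop should print nothing
for d in /mnt/disk*; do
    ls -d "$d/system" "$d/appdata" 2>/dev/null
done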

 

You can also copy the system directory straight to /mnt/cache/ with e.g. "mc" (Midnight Commander).

 

Once you have done that and there is no "system" and no "appdata" directory left on any of your /mnt/disk* paths (they will of course still appear under /mnt/user, where all the devices in the pool are merged),

 

start Docker again and see whether your system performance is any different.
I still have at least one core busy with "shfs" when e.g. Plex is doing a scan, or anything else similar is happening, but the system stays responsive and is no longer totally drowning in IO.
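If you want to see that for yourself, here is a rough way to watch it from the Unraid shell (standard Linux tools, nothing Unraid-specific assumed):

# Show the most CPU-hungry processes; shfs should sit near the top
# while e.g. a Plex library scan is hammering /mnt/user
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 15

# Or sample just the shfs processes a few times, 5 seconds apart
top -b -d 5 -n 3 | grep shfs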

 

 

 


OK, I just read the beginning of the thread again.

 

Basically, you have already made sure your "docker.img" is now on the cache.

The same should be done for the "appdata" share, as it contains all the data of your Docker containers.

 

The next issue would be:
SAB shouldn't impact anything other than the drive you chose to dedicate to it.

That said, in my setup, where I also rely heavily on SAB, it is a very potent piece of software with extremely high IO demands.

I have set up a separate box with a 4 x 2TB RAID10 as a download server, with its / on an SSD.

 

In the earlier days the box was unpacking on the SSD, but I quickly ran out of space.

With the 4 x 2TB disks, SAB easily downloads faster than I can unpack, and the download box is fully IO-saturated.

Depending on what performance level you want to achieve, you might have to migrate your SAB to an additional SSD.

 


I verified it, I don't have /appdata/ or /system/ on any of the physical disks, only on the cache drive.

 

I also put the folder SAB downloads to on a cache-only share, and my problem persists.

 

For me it's basically any time I'm doing sizeable file operations on the cache drives that I hit this issue. I guess I could split the pool and have a single cache drive formatted to XFS; my Optane card is XFS and it doesn't exhibit the issue. However, I'd kind of like to have it work the way it's supposed to instead - I have a pool so that everything is redundant.

 


Since this seems like something that's not going to be fixed for a while, could someone help me understand the correct way to split my cache pool and then make use of a single drive formatted to XFS?

I'm assuming I'd
1) stop & disable docker and vm

2) change the system, appdata and domains shares to array

3) run mover

4) set array to not auto-start

5) reboot

6) unassign one of the pool drives and then format the remaining one to XFS

7) start the array

8) stop & disable docker and vm

9) change the system, appdata and domains shares to cache only

10) run mover

 

?


Nearly right :)

 

3) change the Use cache setting to Yes (needed to move files from cache to array)

6.1) change the number of cache slots to 1 (to stop you being forced to use BTRFS) 

6.2) change the desired format for the remaining cache drive to XFS.  You do not format it at this stage.

7.1) Format the cache drive (which will be showing as unmountable)

8) unnecessary - you did this in step 1)

9) change the Use cache setting to Prefer (needed for files to move from array to cache)

 

11) (optional) change the Use cache setting from prefer to Only
12) re-enable docker & VM services
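Before moving everything back, it may be worth double-checking from the shell that the remaining cache device really came up as XFS after step 7.1; a minimal sketch, assuming the pool mounts at /mnt/cache (the Unraid default):

# The filesystem type of the mounted cache should now read xfs
df -hT /mnt/cache

# xfs_info only works on an XFS filesystem, so it doubles as a sanity check
xfs_info /mnt/cache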

 


Thanks for the reply, two questions:

 

In your "3)" are you addressing what I have in my "2)" ? as in , change it to "yes" under cache usage?

 

Where will 6.2 take place, in the settings?

 

Also... I just noticed something and I'm not sure if it matters, but my default file system is set to "XFS" and all my array drives are set to that. Could that be a factor here?

6 minutes ago, CowboyRedBeard said:

Thanks for the reply, two questions: [...]

Yes, my 3) should be 2)

 

6.2) this is done by clicking on the disk on the Main tab and on the resulting dialog you have the option to explicitly set the format you want.

The defaults only come into play if you do not explicitly set a value on an individual disk. On cache drives you are forced to use btrfs if there is more than one drive in the pool. I am not sure what the default is for a single drive, but I would recommend setting it explicitly to be safe (especially as you want to change a btrfs-formatted drive to XFS).

 

10 minutes ago, testdasi said:

That's like asking macOS to throw away its kernel and use the Linux kernel instead. SHFS = Unraid.

 

You probably meant replacing BTRFS with ZFS.

 

No. 
 

If you look at their latest blog post, and the video that was posted there, you will see that they are indeed considering ZFS.

And I can tell you, from my analysis of the SHFS processes via strace and of their behaviour, that SHFS itself has big performance issues.

Guess why plugins like the directory cache and others exist. ZFS is the superior file system, and it has decent block-wise caching built in, among other features such as snapshots, RAID, etc.
So imagine having the performance of XFS with the flexibility of BTRFS, plus snapshots and something like "dm-cache" built in,
but all with the nice interface of Unraid and its easy handling of Docker containers and VMs.

 

With ZFS on top, multiple pools wouldn't be an issue. Not to mention the amount of attention any bug in ZFS gets from the worldwide community,
whereas a serious bug in SHFS is in the hands of only a few, and writing a filesystem is a very sophisticated task that needs a lot of time and resources.

We would all profit from it, as would LT.

The video in question: https://unraid.net/blog/upcoming-home-gadget-geeks-unraid-show

 


  

20 minutes ago, ephigenie said:

 

No. If you look at their latest blog post and the video posted there, you will see that they are indeed considering ZFS. [...]

 

"Considering ZFS" is not the same as "replacing SHFS with ZFS".

And I have not expressed any doubt that SHFS has performance limitations - I know, and have implemented workarounds for them.

My understanding is that this consideration is for the cache pool; SHFS is still the engine behind the array (and the shares).

 

Whatever "imagine" you want to do, you can't change the fact that shfs = Unraid so the chance that Limetech would abandon it completely is rather low.

You also ignored how integrated ZFS pooling is with its underlying file system, which is RAID-based. Selectively implementing the pooling on a different (non-RAID) file system, and then adding parity calculation on top, is not going to be simple (assuming you want no performance penalty).

I'm not saying whether it is possible or not possible to do, nor whether it should or should not be done.

I'm saying that the SHFS-less ZFS-based product you are imagining is not going to be called "Unraid".

 

And I'm completely ignoring the fact that switching to XFS seems to resolve the issue the OP is seeing - suggesting the bug lies in the integration of the BTRFS RAID pool into Unraid and not in SHFS.

(and directly accessing /mnt/cache to bypass SHFS has been a known workaround for quite some time).


Guys.... Settle down. Let's not pollute the thread with arguments about which is the best file system, a discussion that could only take place somewhere like here of course🤣

 

Back to my problem... I see what I've done here with the single drive on XFS as a workaround. How can I get back to a cache pool with the same performance?

21 hours ago, CowboyRedBeard said:

[...] How can I get back to a cache pool with the same performance?

You can try creating a RAID1 / RAID0 out of your two SSDs and putting XFS on top (see the sketch below).
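In case it's unclear what that means outside of Unraid's own pool handling: roughly, an md RAID device with XFS on top. A sketch only, with hypothetical device names sdX/sdY and a hypothetical mount point /mnt/fastpool, and not something the Unraid GUI will manage for you:

# Mirror the two SSDs with mdadm (use --level=0 for striping instead)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY

# Put XFS on the md device and mount it
mkfs.xfs /dev/md0
mkdir -p /mnt/fastpool
mount /dev/md0 /mnt/fastpool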

Everyone please read: 

https://en.wikipedia.org/wiki/ZFS ... 

- up to triple-device parity per pool

- of course, multiple pools per host

- built-in encryption

- live extension

- built-in deduplication

- built-in hierarchical caching (L1 RAM, L2 e.g. SSD), block-wise and with no data loss if a cache device dies; cache devices can be added and removed live, and there is a separate log device for fast write confirmation (SLOG)

- built-in "self-healing"

- snapshots...


The only downside: pools cannot easily be downsized.

 

Really, in short, 98% of everything we dream about.
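To make a few of those bullet points concrete, here is a hedged sketch of what they look like on the zfs command line (hypothetical pool name "tank" and device names, nothing Unraid-specific):

# Mirrored pool with built-in checksumming and self-healing
zpool create tank mirror /dev/sdX /dev/sdY

# Hierarchical caching: an SSD as L2ARC read cache, another device as SLOG
zpool add tank cache /dev/nvme0n1
zpool add tank log /dev/nvme1n1

# Datasets and snapshots are built in
zfs create tank/appdata
zfs snapshot tank/appdata@before-upgrade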

 

I myself like the comfort of the interface, the VM and Docker handling, and the ease of configuring NFS, SMB, etc.,

and virtually none of that would fall apart.

Filesystems are hugely complex beasts, and the number of forum entries here connected to performance issues of SHFS / the mover is really large.
 

 

1 hour ago, ephigenie said:

- up to triple-device parity per pool

This requires all data devices to be members of the same pool, it prevents fully utilizing devices of different sizes, and if more disks fail than the current redundancy allows you lose the whole pool; for that you already have FreeNAS. One of the main advantages of Unraid is that every data disk is an independent filesystem. Even if LT supports ZFS in the future, and I hope they will, array devices will still be independent filesystems, i.e. without ZFS redundancy; redundancy would still be done by parity. ZFS would be nice mostly for use with the cache pool(s), and shfs is still needed to make them part of the user shares.

 

 

8 hours ago, johnnie.black said:

One of the main advantages of Unraid is that every data disk is an independent filesystem. [...]

 

Well, in parts I can agree: individual filesystems are an advantage.

Unfortunately, what I have seen while debugging shfs is that it is highly inefficient.
This, along with the "mover", causes a lot of issues.

I get that the IO overhead comes from being extra cautious and double-checking everything.

However, since neither the array configuration nor the cache can be extended during live operation...

The "only" configurable thing that is actually causing most of the confusion are the settings around the cache.

And the involvement of the mover.

On that point, I do not understand why it's not possible to make this a transition process where, once all criteria are fulfilled,

a progress meter shows the status of the transition.

E.g. I change a share from "Prefer" to "Cache only": nothing happens in terms of the mover (isn't that unexpected?), given what the other settings do.

E.g. I change a share from "Use cache: Yes" to "Prefer": the data will be copied from the array to the cache. But based on what pattern? MRU? LRU?
 

Especially the case of a share being converted to "cache only" has some of the biggest workflow problems.
Why not stop the VMs and Docker and trigger the move with some rsync-based tool on the shell, as sketched below?
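Purely as an illustration of what such an rsync-based move could look like (assuming Docker and the VMs are already stopped, and assuming the share still has leftovers on disk1 - adjust to your layout):

# Copy the share from the array disk onto the cache, preserving attributes,
# and delete each source file once it has transferred cleanly
rsync -avX --remove-source-files /mnt/disk1/appdata/ /mnt/cache/appdata/

# Remove the now-empty directory tree left behind on the array disk
find /mnt/disk1/appdata -type d -empty -delete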

 

Later on, even when your share is still "cache only", SHFS, triggered by the mover, will still insist on seeking the share's filesystem over and over on ALL

disks. That definitely needs to be avoided in order to give the cache some sort of decent performance;
otherwise the disks that are supposed to be relieved are still hit on every run.
And I validated this with strace.
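For anyone who wants to repeat that check, a minimal strace sketch of the kind I mean (attach only briefly, since tracing adds noticeable overhead):

# Attach to the shfs process and summarise where its time goes;
# Ctrl-C detaches and prints the per-syscall counts
strace -c -f -p "$(pidof -s shfs)"

# Or watch file-related syscalls live to see it walking every /mnt/disk*
strace -f -e trace=file -p "$(pidof -s shfs)"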

 

In terms of the cache and ZFS:

Why would I not prefer having e.g. snapshots or a block-wise cache?

I think there is almost no reason.

 

In terms of different HDD sizes on ZFS:

no issue at all.
Multiple pools are possible as well.

Why would I not prefer having e.g. snapshots or a block-wise cache?

I think there is almost no reason.

Btrfs also has snapshots and many of the same features as ZFS, and it's also much more flexible: devices of multiple sizes can be used fully, the raid profile can be changed without destroying the pool, and new devices can easily be added or removed. On the other hand, there's no doubt ZFS is much more reliable.
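For reference, the kind of online reshaping being referred to, as a hedged sketch with a hypothetical extra device /dev/sdX and the pool mounted at /mnt/cache:

# Add another device to an existing btrfs pool while it stays mounted
btrfs device add /dev/sdX /mnt/cache

# Convert data and metadata to the raid1 profile across the devices
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Devices can later be removed online, as long as the remaining ones
# still satisfy the chosen raid profile
btrfs device remove /dev/sdX /mnt/cache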

 

In terms of different HDD sizes on ZFS: no issue at all.

Yes, you can use them, but any larger device will have its extra capacity unused.

 

Well, in parts I can agree: individual filesystems are an advantage.

Unfortunately, what I have seen while debugging shfs is that it is highly inefficient.

It has its issues, but there's no Unraid without shfs, or something similar.

 

 

