TexasUnraid

Why does unraid become unresponsive when moving large files? Dirty writes seem to be cause?



Ok, before someone says my system is not powerful enough: that is not the issue, and after a lot of research I have narrowed it down some.

 

So I have noticed that when I am moving large amounts of data around, the system will be unresponsive for seconds at a time, sometimes longer. It gets really annoying and will even cause things to break in some cases.

 

Using netdata to track the issue, I found that the stalls line up with dirty writes being flushed from memory to disk. All disks (including cache pools) were completely unresponsive until the flush finished, and then the system would return to normal until the next flush.

 

The really strange part is that it doesn't happen on every flush. Generally it is only after things have been working for a while that the kernel suddenly decides it has to completely purge all the buffers: it stops reading and writes data out of the dirty buffer until it is completely empty, and then I can use the system again. Once it starts, though, it happens every 30-90 seconds.
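For anyone wanting to reproduce this without netdata: the same counters netdata graphs are exposed in /proc/meminfo, so a quick shell loop (Linux only) shows the dirty buffer filling and draining:

```shell
# Watch the kernel's dirty-page counters for a few seconds (Linux only).
# "Dirty" is data waiting in RAM; "Writeback" is data actively being flushed.
for i in 1 2 3 4 5; do
    grep -E '^(Dirty|Writeback):' /proc/meminfo
    echo '---'
    sleep 1
done
```

During one of the stalls you should see Dirty drop sharply while Writeback spikes.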

 

IO wait would also be very high during the flush, but the flush would only be happening to a single disk. I could understand that particular disk being unresponsive, but all disks and cache pools?

 

Is this normal? Is there a way around this? I tried messing with the dirty write percentages in Tips and Tweaks, but it only seemed to let me trade lots of short flushes for fewer large flushes.

Edited by TexasUnraid

4 minutes ago, TexasUnraid said:

Is this normal?

No, but I've seen other people complaining of similar issues. I've never observed it on my servers, and I do multi-terabyte moves with some frequency. One thing that will cause problems is having docker/VM images on the array; other than that, it seems some hardware/configs are more prone to this, though I don't know why.


Well, good to know I am not alone; sad that the cause is not known.

 

No docker or VMs on the array; in fact, all of that stuff is on a dedicated XFS-formatted cache drive (multiple cache pools is fantastic!). The array and main cache pool are only for data.

 

I have been trying to narrow down the issue but have not had much luck. So far the only thing I have really been able to figure out is that it seems to primarily happen when the source drive is significantly faster than the destination drive.

 

So moves from cache to array, for example, or from a UD drive to an array drive when not using turbo write.

 

Basically, it happens in cases where the dirty write buffer fills faster than it empties.
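Back-of-the-envelope, that is easy to put numbers on. A small shell sketch (all the speeds and sizes here are made up for illustration):

```shell
# Seconds until the dirty buffer hits the hard limit, given read and
# write speeds in MB/s and the allowed dirty size in MB.
stall_seconds() {
    fill=$(( $1 - $2 ))                 # MB/s piling up in RAM
    if [ "$fill" -le 0 ]; then
        echo "never (buffer drains as fast as it fills)"
        return
    fi
    echo $(( $3 / fill ))
}

# e.g. cache SSD read at 400 MB/s, array write at 100 MB/s, and a 20%
# dirty_ratio on 32 GB of RAM (~6400 MB allowed dirty):
stall_seconds 400 100 6400   # -> 21 (seconds until writers block)
```

Which would match the experience above: similar-speed drives never hit the limit, while a fast source feeding a slow destination stalls within seconds.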


I had similar problems. I downloaded the Tips and Tweaks community application and adjusted these settings:

 

Disk Cache 'vm.dirty_background_ratio' (%): 1 (Default 10)
Disk Cache 'vm.dirty_ratio' (%): 3 (Default 20)

 

I have 32 GB of RAM, and I found the defaults would cache excessively and then try to dump far too much data at once on large copies. With these settings the reads and writes are much steadier, though you will probably see more frequent disk activity. Read the help items for these parameters in Tips and Tweaks for more info.
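For reference, those Tips and Tweaks fields map straight to the kernel's vm.dirty_* sysctls, so you can check what is currently active from a shell (reading works unprivileged; writing them back, e.g. with `sysctl -w vm.dirty_ratio=3`, needs root):

```shell
# Read the live writeback tunables from /proc/sys -- the same values
# Tips and Tweaks sets for you.
echo "dirty_background_ratio: $(cat /proc/sys/vm/dirty_background_ratio)%"
echo "dirty_ratio:            $(cat /proc/sys/vm/dirty_ratio)%"
```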


I tried 2% and 3%, and while it helped, it seemed to just swap fewer larger flushes for a lot of small flushes and stalls.

 

Overall file performance seemed worse as well from the lack of caching. It is also nice to be able to dump a ~6GB file and have it all go into memory to be written out later.

 

I will try 1% and 2% though to see how that works.


Let me point out that 1% and 2% on a 32GB system is about the same amount of RAM as 10% and 20% on a 4GB system...
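Spelled out in absolute terms (plain shell integer arithmetic):

```shell
# Dirty-page threshold in MB for a given RAM size (GB) and percentage.
threshold_mb() { echo $(( $1 * 1024 * $2 / 100 )); }

# 1%/2% on a 32 GB box:
echo "$(threshold_mb 32 1) MB background, $(threshold_mb 32 2) MB hard"   # 327 / 655
# 10%/20% on a 4 GB box -- roughly the same absolute amounts:
echo "$(threshold_mb 4 10) MB background, $(threshold_mb 4 20) MB hard"   # 409 / 819
```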


Do people still use 4GB systems anymore? I have not used a 4GB system myself in over a decade.

 

Seems a shame to waste the memory, though, even if it fixes the issue. I will try 1% and 2% to see what happens.

 

I can confirm it only happens when the source drive is faster than the destination; I copied 8TB the other day between two similar-speed drives and didn't have any issues.


As I remember, the 'delayed write' schemes were added to OSes back in the 1990s, when 386-20MHz processors were king of the hill, to help computers seem very responsive to human input. (Think of the problem of a fast typist when the characters are appearing on the screen 3 to 5 seconds after the keys are struck!) @TexasUnraid can't believe people use systems with 4GB of RAM; I can remember when a 386 system with 1MB of RAM was the Top of the Line. There was a time in the 1980s when I had a Unix-based "multi-user capable" OS (OS9 for the 6809 processor used in the RS Color Computer) running with 64KB of RAM. And it was running off a single 5-1/4" floppy disk for both OS and user data!

 

When this came up a few years back during an Unraid discussion involving an 'Out of Memory' problem, it was pointed out that the 10%-20% settings had not changed since the days when 1GB of RAM was standard for Linux systems! I seem to recall that was the RAM requirement when I first started using Unraid back in late 2011.

 

Another memory from those 1980s days: I had an IBM PC look-alike that had capacity for 128KB of RAM on the motherboard. To get to the max RAM (640KB) took a card about 12" long and 4" high, completely filled with sixty-four 14 (or 16) pin IC sockets for 64KBx1bit memory chips. (I think I still have the mechanical chip inserter that was used to populate that board in one of my junk boxes...)

Edited by Frank1940

10 minutes ago, Frank1940 said:


lol, notice I said "still use 4gb".

 

I started out with computers back in the 90s; my first PC was an IBM 8088 upgraded to 340KB of RAM, IIRC. I was also really lucky and got a 20MB hard drive at some point to add to the dual 5.25" floppies. Yeah, I was ballin'. lol

 

Mostly I am just surprised, with how cheap RAM is nowadays, particularly DDR3 (since DDR4 doesn't come in 4GB flavors), that people would still use less than 8GB. That is my baseline for a desktop.

 

Admittedly my 32GB is overkill for most things; I rarely get over 50% usage unless running VMs, but I also got my RAM basically for free.


I just updated my hardware a month ago from an Intel E6600 with 2GB of RAM (DDR2, baby!). 😎 It ran fine for a decade with Unraid, but I just used it as a NAS. Cache_dirs was literally the only program I ran (well, and pre_clear). Now I'm with the cool kids like you with 32 gigs.

5 minutes ago, calvinandh0bbes said:


lol, fair enough.

