
Erratic write speeds


dexdiman


Currently my write speeds are inconsistent and all over the place. Over the course of five seconds or so the speed will ramp up to 90-100 MB/s, then slowly drop to 0-30 MB/s. After a couple of seconds it will speed back up to 90-100 MB/s, then drop back down, sometimes even stopping altogether. After a few minutes of this speed-up/slow-down the system "calms down" and speeds only fluctuate between 0-40 MB/s, but they are still constantly fluctuating and never hold steady for more than a second or two.

 

Log file -> https://drive.google.com/file/d/0B0NhjHYX6RMOTFBjdDhBZHdFbjA/view?usp=sharing

[Attachment: 2015-12-21_06_14_22-28_complete.png]

[Attachment: 2015-12-20_09_27_20-1_Running_Action.png]

[Attachment: 2015-12-20_13_41_30-95_complete.png]

Link to comment

This erratic speed is new. A couple of months ago I upgraded to Unraid v6 and to a SATA III card, but before that the speeds were much more stable at 30-40 MB/s. It could be the system caching to RAM and then writing to the array, but wouldn't that kind of speed variation persist through the entire write process? About halfway through the process it only peaks at 30-40 MB/s, and mostly hovers in the teens. I understand that with Unraid you trade write speed for reliability, but that still seems low.

 

 

Link to comment

Those speeds do look very low. The first thing I would do is try to isolate the problem to either Unraid or the network.

 

To test the network, and assuming you have at least 4GB of RAM, type this:

 

sysctl vm.dirty_ratio=95

 

Unraid will now use up to 95% of your free RAM to cache the write instead of the default 20%. Try copying a large file; if the network is working OK, the speed should stay at maximum for much longer than before.

 

Rebooting will restore the default, or type:

 

sysctl vm.dirty_ratio=20
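
If you want to actually see the RAM caching happen while the copy runs, here is a small sketch, assuming the usual watch and grep tools are present on the Unraid console:

# Dirty = data cached in RAM but not yet written; Writeback = data currently being flushed to disk.
watch -n1 'grep -E "Dirty|Writeback" /proc/meminfo'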

 

 

To test raw write speed, use the attached script (posted by WeeboTech on another thread); it will write a 10GB test file to any disk.

 

Example for disk1:

 

Write_speed_test.sh /mnt/disk1/test.dat

 

write_speed_test.zip
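
The attached script isn't reproduced in this post, but a rough stand-in (not WeeboTech's actual script, just an illustrative sketch using plain dd) would look something like this:

#!/bin/bash
# Illustrative write-speed test (hypothetical stand-in for write_speed_test.sh):
# writes a 10GB file of zeros to the given path and lets dd report the throughput.
# conv=fdatasync flushes the data to disk before dd prints its timing, so RAM
# caching doesn't inflate the result. The test file is deleted afterwards.
# Usage: write_speed_test.sh /mnt/disk1/test.dat
dd if=/dev/zero of="$1" bs=1M count=10240 conv=fdatasync
rm -f "$1"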

Link to comment
To test the network, and assuming you have at least 4GB of RAM, type this:

 

sysctl vm.dirty_ratio=95

 

Unraid will now use up to 95% of your free RAM to cache the write instead of the default 20%. Try copying a large file; if the network is working OK, the speed should stay at maximum for much longer than before.

 

Rebooting will restore the default, or type:

 

sysctl vm.dirty_ratio=20

 

I ran the first test and got a solid 114 MB/s for a little while, then it plummeted to 5-10 MB/s (see write.png). I left it running for a few minutes and it didn't really fluctuate much, maybe a megabyte or two either way. I used a single 40GB .mkv file to test.

 

To test raw write speed, use the attached script (posted by WeeboTech on another thread); it will write a 10GB test file to any disk.

 

Example for disk1:

 

Write_speed_test.sh /mnt/disk1/test.dat

 

Disk1: avg 40-50 MB/s

Disk2: avg 35-40 MB/s

Disk3: avg 30-35 MB/s

Disk4: avg 40-50 MB/s

Disk5: avg 30-35 MB/s

Disk6: avg 30-40 MB/s

Disk7: avg 40-45 MB/s

Disk8: avg 40-45 MB/s

Disk9: avg 55-60 MB/s

 

I did notice a trend. All the drives started off well over 100 MB/s, which I am assuming is the RAM cache, then they all slowly dropped down to their averages, and none of them dropped below 30 MB/s.

[Attachment: write.png]

Link to comment

That is strange. It appears your LAN is fine, since it kept full speed for several seconds while Unraid was caching to RAM, and your disk speeds also look normal during the write test. I don't know if you're using plugins or Docker containers, but you could try booting in safe mode and stopping all containers and VMs to see if it makes any difference.
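
If you prefer to do that from the console rather than the webGUI, a minimal sketch, assuming the standard Docker CLI that ships with Unraid v6:

# Stop every running Docker container (docker ps -q lists the IDs of running containers):
docker stop $(docker ps -q)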

Link to comment

Sorry for the late reply, I was out of town for Christmas.

 

The only plugins I am using are the default ones that come pre-installed, and the only Docker containers I have installed are Plex and Plex Request.

 

I booted into safe mode and disabled everything I could. The overall speed was slower and didn't peak as high as normal, but it didn't drop below 30 MB/s write. I booted back into normal mode and the speed hovered around 40-50 MB/s, but I noticed that after every write to RAM the speed would drop to roughly 1-2 MB/s for a few seconds and then build back up to around 40 MB/s.

 

It's better than it was a few days ago, but the write speeds are still very erratic.

Link to comment

It's better than it was a few days ago, but the write speeds are still very erratic.

 

While doing some other tests I may have found the cause of your erratic speeds. I don't know if this is limited to v6.1.6, but it appears that copying to a ReiserFS disk causes these variations in speed; see the screenshots below. I copied the same folder to the same disk, and the only difference is the file system in use, ReiserFS or XFS.

 

I see from your diagnostics that disk9 is XFS. Do you see the same erratic speeds if you copy to that disk?

[Attachment: rfs.png]

[Attachment: xfs.png]
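
If you want to confirm from the console which file system each array disk is using (and how full each one is), a quick check, assuming the usual /mnt/diskN mount points:

# -T prints the file system type, -h prints usage in human-readable units:
df -hT /mnt/disk*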

Link to comment

While doing some other tests I may have found the cause of your erratic speeds, I don’t know if this is limited to v6.1.6 but it appears that copying to Reiserfs disk will cause these variations in speed, see screenshots below, I copied the same folder to the same disk, only thing different is the file system in use, ReiserFS or XFS.

If a ReiserFS disk gets nearly full (>95%), file creation can slow down significantly. Reads do not slow down in the same way.

Link to comment

If a ReiserFS disk gets nearly full (>95%), file creation can slow down significantly. Reads do not slow down in the same way.

 

Yes, I know that; this test was done copying to an empty disk.

 

Also note that the total copy time was similar; the speed was just very irregular with ReiserFS, going from 5 to 60 MB/s, while with XFS it was constant apart from the initial RAM caching.

 

Total copy time for a folder with ~20GB, mostly large files:

 

ReiserFS – 11m16s

XFS – 10m58s

 

Link to comment

If a ReiserFS disk gets nearly full (>95%), file creation can slow down significantly. Reads do not slow down in the same way.

 

Yes, I know that; this test was done copying to an empty disk.

 

Also note that the total copy time was similar; the speed was just very irregular with ReiserFS, going from 5 to 60 MB/s, while with XFS it was constant apart from the initial RAM caching.

 

Total copy time for a folder with ~20GB, mostly large files:

 

ReiserFS – 11m16s

XFS – 10m58s

 

 

I have confirmed this on 6.1.4 and 6.1.6. I copied a 40GB file to an empty XFS drive and the file copied at a solid 50 MB/s, fluctuating by maybe 2 MB/s. Then I copied the same 40GB file to an empty ReiserFS drive and it fluctuated all over the place.

 

Should I change all the drives to XFS or is there a better file system I should use?

Link to comment

I changed all my disks to XFS mainly because of how slow ReiserFS is when disks are almost full; I would wait several seconds for a copy to begin and sometimes even get timeouts. This all went away with XFS.

 

I don't know if it's worth changing just because of the erratic speeds if, as in my test, the actual copy time is similar, but for any new disk I would use XFS only.

 

Link to comment

I think I might make the transition to XFS anyway, because transferring the 40GB file to the XFS disk took about 10 minutes, whereas copying the 40GB file to the ReiserFS disk took almost 30 minutes.

 

I am guessing the process is pretty straightforward.

 

1. Move data off the drive to another drive using rsync

2. Stop the array and set the drive to XFS

3. Start the array and format the drive as XFS

4. Move the data back onto the newly formatted drive

Link to comment

I think I might make the transition to XFS anyway, because transferring the 40GB file to the XFS disk took about 10 minutes, whereas copying the 40GB file to the ReiserFS disk took almost 30 minutes.

 

I am guessing the process is pretty straightforward.

 

1. Move data off the drive to another drive using rsync

2. Stop the array and set the drive to XFS

3. Start the array and format the drive as XFS

4. Move the data back onto the newly formatted drive

 

A couple of points. There's no need to move the data; a copy is quicker, since only one drive is being written to, and the subsequent format to XFS erases the old file system quite efficiently :). I recommend using checksum verification with the copy, or checksumming source and destination and comparing, so you know the copy actually captured everything accurately.
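
As a rough sketch of the copy-and-verify step (the disk numbers below are just examples; substitute your actual source and destination disks):

# Copy (don't move) everything from the ReiserFS disk to a disk with enough free space:
rsync -av /mnt/disk1/ /mnt/disk9/

# Verify: re-run with --checksum as a dry run; any file it lists differs between source and destination:
rsync -avc --dry-run /mnt/disk1/ /mnt/disk9/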

 

Now is also a good time to clean house and verify backup routines. Ideally you should be able to restore your backup to the XFS drive, and compare it to the original untouched RFS drive, then proceed with formatting the RFS drive.

Link to comment
2 weeks later...

I thought I would update my findings so far for anyone curious. I am about two-thirds of the way through converting my file system from ReiserFS to XFS, and I can say the performance has improved dramatically: from erratic 0-30 MB/s write speeds, mostly hovering in the teens, to a very stable and fast 50+ MB/s. The only real downside to this process is the time it takes to move content from one hard drive to another, about an hour to an hour and a half per 100GB. Other than that it's pretty easy and simple.

Link to comment
