
Slow transfer to cache drive


Popple2000

Recommended Posts

Hey guys, as the title implies, I have abysmal transfer speeds from my desktop to my server (cache drive installed and being utilized).

This has been an issue for a long time now... but I've finally had enough...

The server is in the basement connected to an unmanaged 8 port gigabit switch with cat6 as is the rest of the house.

I have recently swapped the cache SSD in case that was the issue, and I have also added an LSI 9211-8i in case one of the old PCI cards was the problem.

The cache drive is connected to a SATA port on the mobo.

I don't know what else to do...

If any other information is needed, please don't hesitate to ask.

any ideas?

 

Diag attached,

Thanks

 

xfer.png

 

diagnostics-20171005-1610.zip

2 minutes ago, johnnie.black said:

You have the trim plugin installed, but it didn't run during the time the syslog covers. Make sure it's scheduled to run regularly, and also check this post, or most free space may not get trimmed:

 

 

 

Thanks for the reply Johnnie,

Trim is set to run Sunday morning at 0630, and the server has only been up for 21 or so hours; could that be why you don't see it being run?

The link you mention looks like it's aimed mainly at cache pools? I only have a single drive... does it still apply to me?

 

Thanks

4 minutes ago, johnnie.black said:

It applies to any btrfs filesystem. The problem is that allocated space is not trimmed, and this can be most of the free space, as in your case; it doesn't matter if it's a single- or multi-device pool.

It's been the same issue since the drive was new... same as the old cache drive.

Is there a command I can send through PuTTY that will invoke a manual trim, so that I can try another transfer right now?

Thanks

 

Edit:

Linux 4.9.30-unRAID.
Last login: Thu Oct  5 15:29:23 -0400 2017 on /dev/pts/0 from 192.168.0.17.
root@Beafy:~# fstrim -v /mnt/cache
/mnt/cache: 39.2 GiB (42101694464 bytes) trimmed

 

Does that mean it ran Trim?
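As a quick sanity check, the byte count fstrim printed above converts exactly to the GiB figure it shows (1 GiB = 2^30 bytes):

```shell
# Convert fstrim's reported byte count to GiB (1 GiB = 2^30 bytes).
# 42101694464 bytes is the figure from the fstrim output above.
awk 'BEGIN { printf "%.1f GiB\n", 42101694464 / (1024 * 1024 * 1024) }'
# prints: 39.2 GiB
```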
 

8 minutes ago, Popple2000 said:

Does that mean it ran Trim?

 

Yes, and looking at the bytes trimmed, almost all free space was trimmed, so the link probably won't help, though it certainly won't hurt.

 

I need to investigate why in most cases only unallocated space is trimmed, but in some cases all free space is trimmed...

1 minute ago, johnnie.black said:

 

Yes, and looking at the bytes trimmed, almost all free space was trimmed, so the link probably won't help, though it certainly won't hurt.

 

I need to investigate why in most cases only unallocated space is trimmed, but in some cases all free space is trimmed...

So, I have 60 GB used and 70 GB free... where is the 39.2 GB coming from? My Docker container? It's 42 GB.

I am confused...

6 minutes ago, Popple2000 said:

So, is there something else wrong?

 

Did you delete something after downloading the diagnostics?

 

6 minutes ago, Popple2000 said:

How would I go about doing that?

 

You can use Midnight Commander (mc) on the console to copy one or more large files from a hard disk and compare speeds, e.g., copy from /mnt/disk1 to /mnt/cache.
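If mc isn't handy, a plain shell copy with timing gives the same information. A hedged sketch: the /mnt/disk1 and /mnt/cache paths come from this thread; the demo below uses small temp files instead, so it is self-contained and safe to run anywhere.

```shell
# Sketch of a copy-speed test without mc. On the server you would point
# src at a large file on an array disk and dst at /mnt/cache (paths assumed
# from this thread); here we create a small temp file for the demo.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=16 status=none   # make a 16 MiB test file

start=$(date +%s%N)
cp "$src" "$dst"                                       # the timed copy
end=$(date +%s%N)

cmp -s "$src" "$dst" && echo "copy ok"                 # verify the copy
awk -v ns=$((end - start)) 'BEGIN { printf "took %.3f s\n", ns / 1e9 }'
rm -f "$src" "$dst"
```

With a file large enough to outlast any write cache, size divided by the elapsed time gives a throughput figure comparable to what mc reports.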

26 minutes ago, johnnie.black said:

 

Did you delete something after downloading the diagnostics?

 

 

You can use Midnight Commander (mc) on the console to copy one or more large files from a hard disk and compare speeds, e.g., copy from /mnt/disk1 to /mnt/cache.

I may have run the mover and not let it finish before downloading the diag. I have attached a fresh diag.

 

I used mc to transfer a 6 GB file from disk7 to the cache drive; it started at 40 MB/s, quickly climbed, and finished at 100.6 MB/s.

 

My desktop transfers are now also bouncing between 70 and 90 MB/s.

 

Is it possible my weekly TRIM just hasn't been running?

 

PS: for fstrim -v /mnt/cache, should I add a -a in there like the plugin does?

 

diagnostics-20171005-1727.zip


Current diags show much less used space; it also means that only the unallocated space was trimmed, so you should do the balance I linked earlier.

 

9 hours ago, Popple2000 said:

PS: for fstrim -v /mnt/cache, should I add a -a in there like the plugin does?

 

You can. I would recommend scheduling the plugin to run daily, but you'll also need to keep an eye on the unallocated space; the thread I linked earlier has more info.

 

Also, since you're using a single device, and if snapshots and checksums are not a priority, you can also convert your cache to XFS; it's lower maintenance, at least for now, though btrfs behavior should improve once you get to kernel 4.14.
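One hedged way to keep an eye on the unallocated space is to parse the `btrfs fi show` output. The snippet below feeds the sample output quoted later in this thread so it runs anywhere; on the server you would pipe in `btrfs fi show /mnt/cache` instead.

```shell
# Parse 'btrfs fi show'-style output to compare used vs allocated space.
# The sample text is the real output quoted in this thread; on the server,
# replace the here-variable with the live command's output.
sample='Label: none  uuid: d13e1497-9684-4fcc-8453-fb33ef211a2b
    Total devices 1 FS bytes used 54.80GiB
    devid    1 size 119.24GiB used 79.03GiB path /dev/sdf1'

echo "$sample" | awk '
  /FS bytes used/ { fs_used = $7 }            # space actually used by data
  /devid/         { size = $4; alloc = $6 }   # device size and allocated chunks
  END { print "used:", fs_used, "allocated:", alloc, "of", size }'
# prints: used: 54.80GiB allocated: 79.03GiB of 119.24GiB
```

A large gap between "used" and "allocated" is the sign that a balance is due.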

2 hours ago, johnnie.black said:

Current diags show much less used space; it also means that only the unallocated space was trimmed, so you should do the balance I linked earlier.

 

 

You can. I would recommend scheduling the plugin to run daily, but you'll also need to keep an eye on the unallocated space; the thread I linked earlier has more info.

 

Also, since you're using a single device, and if snapshots and checksums are not a priority, you can also convert your cache to XFS; it's lower maintenance, at least for now, though btrfs behavior should improve once you get to kernel 4.14.

 

Thanks for all the help!

 

I ran a balance last week, but at the end it still said "no balance found" or something like that in the GUI (I'm not at home at the moment; I can update later).

I will change the plugin to run daily as recommended.

I made it btrfs because I had planned on adding a second drive for a pool (btrfs is needed for a pool, if I'm not mistaken). However, I'm feeling 128 GB was too small, so I was planning on getting 2x 250 GB EVOs the next time I see them on sale.

 

Thanks again, all the tips are appreciated!

 

22 minutes ago, Popple2000 said:

I ran a balance last week, but at the end it still said "no balance found"

 

That's normal after the balance is finished.

 

The purpose of balancing in this case is to recover unallocated space by compacting partially used chunks; from your last diags:

 

Quote

Label: none  uuid: d13e1497-9684-4fcc-8453-fb33ef211a2b
    Total devices 1 FS bytes used 54.80GiB
    devid    1 size 119.24GiB used 79.03GiB path /dev/sdf1

 

You only have 54.8 GiB used, but 79.03 GiB allocated. Trim will only run on the free unallocated space (119.24 - 79.03, ~40 GiB), so the remaining free space will remain untrimmed. Running a balance like so will recover most of the unallocated space:

 

btrfs balance start -dusage=75 /mnt/cache

I have this balance running weekly on my cache device; it keeps the allocated space under control and close to the used space.
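The arithmetic above can be sanity-checked: unallocated space is the device size minus the allocated chunks, and the result matches the ~39.2 GiB that fstrim reported earlier (the small difference is rounding and reserved space):

```shell
# Unallocated space = device size - allocated chunks, using the figures
# from the diagnostics quoted above (119.24 GiB size, 79.03 GiB allocated).
awk 'BEGIN { printf "unallocated: %.2f GiB\n", 119.24 - 79.03 }'
# prints: unallocated: 40.21 GiB
```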

3 hours ago, johnnie.black said:

 

That's normal after the balance is finished.

 

The purpose of balancing in this case is to recover unallocated space by compacting partially used chunks; from your last diags:

 

You only have 54.8 GiB used, but 79.03 GiB allocated. Trim will only run on the free unallocated space (119.24 - 79.03, ~40 GiB), so the remaining free space will remain untrimmed. Running a balance like so will recover most of the unallocated space:

 


btrfs balance start -dusage=75 /mnt/cache

I have this balance running weekly on my cache device; it keeps the allocated space under control and close to the used space.

 

OK, that's good to know!

Just to confirm, I should run "btrfs balance start -dusage=75 /mnt/cache" on my server, correct?

Is there an easy way to set that up to run every week?

 

Thanks again!

2 minutes ago, johnnie.black said:

 

Yes

 

 

I use the User Scripts plugin.

This was the output...


Linux 4.9.30-unRAID.
Last login: Thu Oct  5 16:48:20 -0400 2017 on /dev/pts/0 from 192.168.0.17.
root@Beafy:~# btrfs balance start -dusage=75 /mnt/cache
Done, had to relocate 24 out of 80 chunks

 

I am assuming this is OK? I'm not really sure what it means...

 

I just installed User Scripts; I'll go have a peek at that now.

 

Thanks!
 

1 minute ago, johnnie.black said:

 

It's the normal output; you can check the results with:

 


btrfs fi show /mnt/cache

 

 

 


This is what I got 


root@Beafy:~# btrfs fi show /mnt/cache
Label: none  uuid: d13e1497-9684-4fcc-8453-fb33ef211a2b
        Total devices 1 FS bytes used 54.87GiB
        devid    1 size 119.24GiB used 57.03GiB path /dev/sdf1

root@Beafy:~#

 

Also, for the script..

 

#!/bin/bash
btrfs balance start -dusage=75 /mnt/cache

 

I can just leave it like that and set it to run weekly?

 

Thanks
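The User Scripts plugin handles the scheduling; purely as a hedged illustration of what a weekly schedule looks like under the hood, here is the equivalent cron syntax (path to btrfs assumed; on stock unRAID, manual crontab edits are lost on reboot, which is why the plugin's own scheduler is the recommended route):

```shell
# Illustrative only: a crontab line for a weekly balance (Sundays at 05:00).
# Fields: minute hour day-of-month month day-of-week command.
# The /sbin/btrfs path is an assumption; check with 'which btrfs'.
echo '0 5 * * 0 /sbin/btrfs balance start -dusage=75 /mnt/cache'
```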

Just now, Popple2000 said:

This is what I got 


root@Beafy:~# btrfs fi show /mnt/cache
Label: none  uuid: d13e1497-9684-4fcc-8453-fb33ef211a2b
        Total devices 1 FS bytes used 54.87GiB
        devid    1 size 119.24GiB used 57.03GiB path /dev/sdf1

 

OK, now you have 54.87 GiB used and 57.03 GiB allocated; before you had 54.80 used and 79.03, so much better.

 

2 minutes ago, Popple2000 said:

I can just leave it like that and set it to run weekly?

 

Yes, that's all it needs.

 


Archived

This topic is now archived and is closed to further replies.
