Doug Eubanks Posted December 1, 2020

I'm curious whether I can do anything to increase my performance, especially during parity checks.

Ryzen 2600, 48GB of RAM, running 6.9.0-beta35. All drives are 10TB IronWolfs, except one of the dual parity drives, which is a Western Digital (WD101KFBX-68R56N0). I'm running 1.5TB of RAID1 SSD cache. Everything is running BTRFS. I'm using reconstruct write, and I don't power down any of my drives; I was never able to get that to work properly, but I'm not opposed to it.

SAS9211-8i 8-port internal 6Gb SATA+SAS PCIe 2.0, flashed to IT mode, running firmware 2118it (shows up as "Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)")
HP 468405-002 PCIe SAS expander card (468405-001 / 487738-001), firmware "HP SAS EXP Card 2.10"

[0:0:0:0]   disk  SanDisk  Cruzer Fit        1.00  /dev/sda  31.4GB
[9:0:0:0]   disk  ATA      ST10000VN0004-1Z  SC60  /dev/sdb  10.0TB
[9:0:1:0]   disk  ATA      Samsung SSD 860   2B6Q  /dev/sdc  1.00TB
[9:0:2:0]   disk  ATA      WDC WDS500G2B0A   90WD  /dev/sdd  500GB
[9:0:3:0]   disk  ATA      Samsung SSD 860   2B6Q  /dev/sde  1.00TB
[9:0:4:0]   disk  ATA      SanDisk SDSSDH35  70RL  /dev/sdf  500GB
[9:0:5:0]   disk  ATA      ST10000VN0004-1Z  SC60  /dev/sdg  10.0TB
[9:0:6:0]   disk  ATA      ST10000VN0004-1Z  SC60  /dev/sdh  10.0TB
[9:0:7:0]   disk  ATA      ST10000VN0008-2J  SC60  /dev/sdi  10.0TB
[9:0:8:0]   disk  ATA      ST10000VN0004-1Z  SC60  /dev/sdj  10.0TB
[9:0:9:0]   disk  ATA      ST10000VN0008-2J  SC60  /dev/sdk  10.0TB
[9:0:10:0]  disk  ATA      WDC WD101KFBX-68  0A03  /dev/sdl  10.0TB

I was running a bunch of maintenance scripts, but someone told me that was a waste, and I'm not sure what kind of maintenance my drives actually need. I was running balances (weekly, I believe); I'm no longer doing any balances on my data drives, since I was told they weren't required.
The balance commands looked like this:

/usr/bin/ionice --class idle /usr/bin/nice --adjustment=19 /sbin/btrfs balance start -musage=50 /mnt/disk1 > /dev/shm/disk1.balance.output
/usr/bin/ionice --class idle /usr/bin/nice --adjustment=19 /sbin/btrfs balance start -dusage=90 /mnt/disk1 >> /dev/shm/disk1.balance.output

Scrubs are running twice a month, 15 days apart, one drive at a time:

/usr/bin/ionice --class idle /usr/bin/nice --adjustment=19 /sbin/btrfs scrub start -Bd -c 2 -n 5 /mnt/disk4 > /dev/shm/disk4.scrub.output

Cache maintenance looks like this, daily:

/usr/bin/ionice --class idle /usr/bin/nice --adjustment=19 /sbin/btrfs scrub start -Bd -c 2 -n 5 /mnt/cache > /dev/shm/cache.scrub.output

I'm also running daily SMART checks at 9am:

for i in {b..o}; do
    smartctl --test=short /dev/sd$i
done

If my controller/expander is my bottleneck, what is the currently suggested setup?
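A more robust variant of the SMART cron loop, sketched under the assumption that smartmontools is installed: /dev/sdX letters can shift between boots, so enumerating devices with `smartctl --scan` is safer than a hard-coded {b..o} range. Skipping the USB boot stick by its device node (sda on this system) is also an assumption you'd adjust per machine.

```shell
# Enumerate SMART-capable devices instead of hard-coding drive letters.
# Assumes smartmontools is installed; adjust boot_stick for your system.
boot_stick=/dev/sda
devices=$(smartctl --scan 2>/dev/null | awk -v skip="$boot_stick" '$1 != skip {print $1}')
for dev in $devices; do
    smartctl --test=short "$dev"   # queue a short self-test on each drive
done
```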
Vr2Io Posted December 1, 2020 (edited)

The expander is a 3Gb/s type; if it's set up with a single link there will be some bottleneck (counting the 7 disks on it), but it shouldn't be this slow. There's some hidden problem that isn't apparent from the info above. You should post your diagnostics, and ideally provide DiskSpeed test results (the Docker app).

57 minutes ago, Doug Eubanks said: Scrubs are running twice a month

I'd suggest not scrubbing the array disks. If I'm correct, it means fully writing all the data (and parity), and I'd be concerned about a disk failure or data corruption happening during it. I also don't think it has much benefit. I just keep 200GB of free space per disk and run a parity check monthly.

From experience, a slow parity check is mainly caused by:
- a PCIe bandwidth or link-width issue
- one or more disks with a problem
- one or more disks busy with another task
- the CPU in a power-save mode, with the clock speed too low

Edited December 1, 2020 by Vr2Io
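A back-of-envelope sketch of the single-link scenario above, assuming a 3Gb/s expander on one x4 wide port and roughly 100 MB/s of usable payload per 1Gb/s of raw SAS rate (after 8b/10b encoding and protocol overhead; that per-Gb/s figure is an assumption, not a measured number):

```shell
# Usable bandwidth of a single x4 link to a 3Gb/s expander, shared by
# the 7 array drives read in parallel during a parity check.
lanes=4
gbps=3
usable_mb=$((lanes * gbps * 100))   # ~1200 MB/s for the whole link
drives=7
per_drive=$((usable_mb / drives))   # per-drive ceiling
echo "per-drive ceiling: ~${per_drive} MB/s"
```

Around 170 MB/s per drive is below the outer-track speed of a 10TB drive, so a single 3Gb/s link would cap the early part of a parity check but would not by itself explain a dramatically slow one.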
Doug Eubanks Posted December 2, 2020 (Author)

Thanks for your reply. I paused my parity check and ran the disk speed test. My diagnostics are attached as well.

Thanks,
Doug

unraid-diagnostics-20201201-1412.zip
JorgeB Posted December 2, 2020

Also run the controller benchmark and post the results; make sure the screenshot shows the link speed.
Doug Eubanks Posted December 3, 2020 (Author)

SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
Broadcom / LSI Serial Attached SCSI controller
Type: Onboard Controller
Current & Maximum Link Speed: 5GT/s width x8 (4 GB/s max throughput)
Capabilities: storage pm pciexpress vpd msi msix bus_master cap_list rom
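For reference, the negotiated versus maximum PCIe link can also be read straight from lspci. This is a sketch assuming the usual vendor:device ID of 1000:0072 for the LSI SAS2008 (LnkCap shows the card's maximum, LnkSta what was actually negotiated); check `lspci -nn` if your ID differs, and note that `-vv` may need root to show the link fields.

```shell
# Show the SAS2008 HBA's PCIe link capability vs. negotiated status.
# 1000:0072 is an assumed vendor:device ID; verify with `lspci -nn`.
lspci -d 1000:0072 -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:' \
    || echo "HBA not found (or lspci needs root for -vv details)"
```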
JorgeB Posted December 3, 2020

While the controller is bottlenecking a little, it's not the issue. Please post diags taken during a parity check, and make sure no other array activity is going on; there were some small writes to the array in the previous diags.
Doug Eubanks Posted December 3, 2020 (Author)

I stopped all of my VMs and all of my Dockers, then ran the diags and benchmarks while the system was completely idle. I'm still trying to complete the benchmarks while a parity check is running, but I keep getting "speed gap" warnings and retries even though I checked the box to ignore speed gaps.

unraid-diagnostics-completely idle.zip
Doug Eubanks Posted December 3, 2020 (Author)

I was unable to complete the benchmarks on parity drive 2, so I canceled the parity check.
JorgeB Posted December 4, 2020

10 hours ago, Doug Eubanks said: I'm still trying to complete the benchmarks while parity is running

Why? That won't give accurate results.