Everything posted by JorgeB

  1. Played a little with the new par2 creation, working great, many thanks! I know this is a very early version but is it possible to save the settings like recursive and redundancy %?
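For reference, a minimal sketch of what those settings map to with the stock par2cmdline tool (the plugin's own option names may differ, and -R recursion only exists in newer par2cmdline builds, so treat the exact flags as an assumption):

```bash
# Sketch only: create recovery files with ~10% redundancy for an example share.
# -r10 sets redundancy to 10%; -R recurses into subdirectories (newer par2cmdline only).
par2 create -r10 -R /mnt/user/Backups/archive.par2 /mnt/user/Backups/*

# Verify the files against the recovery set later.
par2 verify /mnt/user/Backups/archive.par2
```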
  2. So start the array with the disk to be precleared so it builds "disk.cfg", stop the array, remove the drive from the array, do a "new config", and then preclear_disk.sh should work? I just tried it to make sure, and in fact you only need to do a new config; that alone creates the disk.cfg.
  3. Disk.cfg needs to exist; you can create a one-disk array on the test server and then do a new config so the disk becomes available to preclear, or copy the disk.cfg from your other server.
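A minimal sketch of the copy-from-the-other-server route, assuming SSH is enabled on both servers and the standard Unraid flash layout (/boot/config/disk.cfg); doing a "New Config" from the webGUI works just as well:

```bash
# Sketch only: copy an existing disk.cfg to the test server so preclear_disk.sh will run.
# "tower" is a placeholder hostname for the other server.
scp root@tower:/boot/config/disk.cfg /boot/config/disk.cfg
```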
  4. Agree, 100MB/s is a good average speed and what I shoot for; I try to keep my parity checks at or just over 10 hours so I can run them overnight and they don't slow down the servers during the day. Also, electricity here is half price during the night, so that's another bonus.
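As a rough worked example of the 10-hour target, assuming a 4TB parity disk (a parity check has to read every disk end to end, so the duration is roughly parity size divided by average speed):

```bash
# 4TB ≈ 4,000,000 MB; at a 100MB/s average that is 40,000s ≈ 11.1 hours,
# so the average needs to stay a bit above 100MB/s to finish near 10 hours.
awk 'BEGIN { printf "%.1f hours\n", 4000000 / 100 / 3600 }'
```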
  5. From the test you posted earlier I believe they are:
     Parity: HGST HDS724040ALE640 PK1331PAKGDJRS 4.0 TB 129 MB/sec avg - 7.2k 800GB/platter
     Disk 1: HGST HDN724040ALE640 PK1334PBGWWV2X 4.0 TB 128 MB/sec avg
     Disk 2: HGST HDN724040ALE640 PK2334PBHBMBYR 4.0 TB 129 MB/sec avg
     Disk 3: Hitachi HDS724040ALE640 PKW331P1GWE06Z 4.0 TB 138 MB/sec avg
     Disk 4: WDC WD40EFRX-68WT0N0 WD-WCC4E0719150 4.0 TB 120 MB/sec avg - 5.4k 1TB/platter
     Disk 5: HGST HDS724040ALE640 PK1331PAJYLN9X 4.0 TB 138 MB/sec avg
     Disk 6: Hitachi HDS5C4040ALE630 PL1311LAG16EKA 4.0 TB 110 MB/sec avg - 5.4k 800GB/platter
     Disk 7: HGST HDS724040ALE640 PK1331PAKL3D5S 4.0 TB 136 MB/sec avg
     Disk 8: Hitachi HDS723030ALA640 MK0311YHG72PZA 3.0 TB 112 MB/sec avg - 7.2k 600GB/platter
     Disk 9: Hitachi HDS723030ALA640 MK0311YHGAVDHA 3.0 TB 117 MB/sec avg
     Disk 10: Hitachi HDS723030ALA640 MK0311YHGBXJVA 3.0 TB 114 MB/sec avg
     Disk 11: Hitachi HDS723030ALA640 MK0311YHGAV8VA 3.0 TB 81 MB/sec avg
     Disk 12: WDC WD30EFRX-68AX9N0 WD-WMC1T1802837 3.0 TB 112 MB/sec avg - 5.4k 1TB/platter
     Disk 13: WDC WD30EFRX-68AX9N0 WD-WMC1T2444427 3.0 TB 110 MB/sec avg
     Disk 14: WDC WD30EFRX-68AX9N0 WD-WMC1T2756512 3.0 TB 123 MB/sec avg
     Disk 15: WDC WD30EFRX-68AX9N0 WD-WCC1T0580475 3.0 TB 121 MB/sec avg
     Disk 16: WDC WD30EFRX-68AX9N0 WD-WCC1T1266169 3.0 TB 123 MB/sec avg
     Disk 17: WDC WD30EFRX-68AX9N0 WD-WCC1T0596779 3.0 TB 130 MB/sec avg
     Disk 18: WDC WD40EFRX-68WT0N0 WD-WCC4E1718782 4.0 TB 120 MB/sec avg - 5.4k 1TB/platter
     Disk 19: Hitachi HDS724040ALE640 PK2331PAHEGLGT 4.0 TB 138 MB/sec avg
     Disk 20: HGST HDS724040ALE640 PK1381PAKEULGS 4.0 TB 139 MB/sec avg
  6. Quoted question: "I never saw those results on my servers. On which controllers do you see these results? Onboard SATA ports? I get 30-40 MB/sec lower results across the whole line. What could be the bottleneck on my Main server, if there is any?"
     Those results are disk dependent, especially because the disks are tested one at a time; any SATA2 or faster controller should be capable of 250MB/s+. Few disks on the market today can do 200MB/s+; these two are the exception among the ones I have, which are mostly WD green drives.
  7. Of the disks I currently have I’m very impressed with the 8TB Seagate, just hope they are reliable as Seagate is not usually my first choice. The 3TB Toshiba is also a good performer, both can do ~200MB/s in the outer cylinders.
  8. Although the 1430SA uses a Marvell chipset, it does not use the same mvsas driver as the SASLP and SAS2LP; I believe the increase in performance on your older system using the tweak was more from lower CPU utilization and less from any benefit to the 1430. The 1430SA has been a very consistent performer since Unraid v5 and always delivers more than 200MB/s per disk, which is consistent with its max theoretical bandwidth of 1000MB/s. Different hardware and tunable settings can make a small difference, but I doubt anyone with an Intel socket 1155 or later CPU will see any gains from the changes, as the card is already performing at maximum speed.
     Average parity check with 4 SSDs - Intel G1620 2.7GHz (MB/s):
     V5.0.6  V6.0.0  V6.0.1  V6.1.0  V6.1.1  V6.1.2  V6.1.3  V6.1.3 with nr_r=8
     213.4   214.9   213.4   213.4   213.4   214.9   214.9   209.3
     AMD users may see some difference, as speed is not so consistent:
     Average parity check with 4 SSDs - AMD X2 4200+ 2.2GHz (MB/s):
     V5.0.6  V6.0.0  V6.0.1  V6.1.0  V6.1.1  V6.1.2  V6.1.3
     180.6   200.1   200.1   181.9   196.4   180.9   201.4
     Unfortunately my AMD board is not working at the moment, so I can't test with the tweak.
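For context, the nr_r=8 column above refers to the nr_requests tweak discussed in the SAS2LP thread; a minimal sketch of applying it by hand, where sdX is a placeholder for an array disk (the value resets at reboot):

```bash
# Sketch only: lower the block-layer request queue depth for one disk.
echo 8 > /sys/block/sdX/queue/nr_requests
# Confirm the current value:
cat /sys/block/sdX/queue/nr_requests
```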
  9. The latest BIOS is 2507; you can download it from the Adaptec website and flash it. http://www.adaptec.com/en-us/downloads/bios_fw/bios_fw_ver/productid=aar-1430sa&dn=adaptec+serial+ata+ii+raid+1430sa.html
  10. I recommend the Adaptec 1430SA; it's relatively cheap on eBay and works very well with Unraid, just make sure it's running the latest firmware for >2TB support.
  11. Just to report that this worked perfectly for me to consolidate my TV series by season on the same disk, it took about 48 hours to complete, can’t even imagine how long it would take by hand. Thanks again.
  12. That cable will work, I have the exact same one, although if that price is in euros I think it's somewhat high.
  13. +1 I’ve also been using corz, very happy to be able to create/verify checksums without using a windows pc. Many thanks!
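For anyone without the plugin, a minimal command-line equivalent using plain md5sum (this is not the corz .hash format, just the same idea, and the share path is only an example):

```bash
# Create checksums for every file under an example share.
cd /mnt/user/Media
find . -type f -print0 | xargs -0 md5sum > /mnt/user/Media/checksums.md5

# Verify later; only mismatches and read errors are printed.
md5sum -c --quiet /mnt/user/Media/checksums.md5
```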
  14. Quoted exchange: "Is there a H320 or is this a typo? Before flashing the LSI (Pxx) firmware you have to flash the Dell IT firmware! Did you do this successfully?" / "H320 was a typo - I corrected it. Yes, I flash the Dell IT firmware first and I can do it successfully - I will attach the flash log for this job and pictures of what it looks like after the Dell IT flash. My SAS address is 590b11xxx - do I perhaps have to change it to an LSI address?"
      Have you tried the simpler procedure posted by opentoe? It worked great for me.
  15. Many thanks to Freddie for these scripts; I didn't plan my split levels as I should have, and I would have spent days trying to do this by hand. Thanks as well to Benni-chan; I have over 1000 TV folders to consld8 and that helps a lot. Is it possible to output the test run for the line below to a text file?
      find "/mnt/user/TV" -mindepth 2 -maxdepth 2 -type d -print0 | xargs -0 -n 1 consld8 -t
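A plain shell redirection should be enough to capture that test run (an assumption on my part, since consld8 may also have its own logging options); for example, writing the dry-run output to the flash drive:

```bash
# Capture the consld8 test run (stdout and stderr) to a text file.
find "/mnt/user/TV" -mindepth 2 -maxdepth 2 -type d -print0 | \
  xargs -0 -n 1 consld8 -t > /boot/consld8_test.txt 2>&1
```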
  16. Upgraded my friend's other server using the plugin and the same thing happened: the pen drive didn't boot and had no syslinux folder; resolved it the same way.
  17. Just updated a friend's system from 5.0.5 to v6 using the plugin; after the reboot the pen drive didn't boot. I inserted the pen in my PC and it was missing the syslinux folder, copied it from a v6 install, clicked make_bootable, and it then booted to v6 successfully.
  18. Thanks for these instructions, worked perfectly. Fastest controllers I tried, can get 300MB/s+ parity check on my test server with 8 SSDs.
  19. I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that they may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed. Next to each controller is its maximum theoretical throughput and my results depending on the number of disks connected; the result is the observed parity/read check speed using a fast SSD-only array with Unraid v6. Measured controller power consumption with all ports in use is noted where available (e.g., 2W).

      2 Port Controllers
      SIL 3132 PCIe gen1 x1 (250MB/s): 1 x 125MB/s, 2 x 80MB/s
      Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards: 1 x 375MB/s, 2 x 206MB/s
      JMicron JMB582 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40148 and other similar cards: 1 x 570MB/s, 2 x 450MB/s

      4 Port Controllers
      SIL 3114 PCI (133MB/s): 1 x 105MB/s, 2 x 63.5MB/s, 3 x 42.5MB/s, 4 x 32MB/s
      Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s): 4 x 210MB/s
      Marvell 9215 PCIe gen2 x1 (500MB/s) - 2W - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization): 2 x 200MB/s, 3 x 140MB/s, 4 x 100MB/s
      Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2W - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization): 2 x 375MB/s, 3 x 255MB/s, 4 x 204MB/s
      IBM H1110 PCIe gen2 x4 (2000MB/s) - LSI 2004 chipset, results should be the same as for an LSI 9211-4i and other similar controllers: 2 x 570MB/s, 3 x 500MB/s, 4 x 375MB/s
      Asmedia ASM1064 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40156 and other similar cards: 2 x 450MB/s, 3 x 300MB/s, 4 x 225MB/s
      Asmedia ASM1164 PCIe gen3 x2 (1970MB/s) - NOTE: not actually tested, performance inferred from the ASM1166 with up to 4 devices: 2 x 565MB/s, 3 x 565MB/s, 4 x 445MB/s

      5 and 6 Port Controllers
      JMicron JMB585 PCIe gen3 x2 (1970MB/s) - 2W - e.g., SYBA SI-PEX40139 and other similar cards: 2 x 570MB/s, 3 x 565MB/s, 4 x 440MB/s, 5 x 350MB/s
      Asmedia ASM1166 PCIe gen3 x2 (1970MB/s) - 2W: 2 x 565MB/s, 3 x 565MB/s, 4 x 445MB/s, 5 x 355MB/s, 6 x 300MB/s

      8 Port Controllers
      Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s): 4 x 220MB/s (167MB/s*), 5 x 177.5MB/s (135MB/s*), 6 x 147.5MB/s (115MB/s*), 7 x 127MB/s (97MB/s*), 8 x 112MB/s (84MB/s*)
      * PCI-X 100MHz slot (800MB/s)
      Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6W: 4 x 140MB/s, 5 x 117MB/s, 6 x 105MB/s, 7 x 90MB/s, 8 x 80MB/s
      Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6W: 4 x 340MB/s, 6 x 345MB/s, 8 x 320MB/s (205MB/s*, 200MB/s**)
      * PCIe gen2 x4 (2000MB/s)  ** PCIe gen1 x8 (2000MB/s)
      LSI 9211-8i PCIe gen2 x8 (4000MB/s) - 6W - LSI 2008 chipset: 4 x 565MB/s, 6 x 465MB/s, 8 x 330MB/s (190MB/s*, 185MB/s**)
      * PCIe gen2 x4 (2000MB/s)  ** PCIe gen1 x8 (2000MB/s)
      LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9W - LSI 2308 chipset: 8 x 565MB/s
      LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset: 8 x 565MB/s (425MB/s*, 380MB/s**)
      * PCIe gen3 x4 (3940MB/s)  ** PCIe gen2 x8 (4000MB/s)

      SAS Expanders
      HP 6Gb (3Gb SATA) SAS Expander - 11W
      Single Link with LSI 9211-8i (1200MB/s*): 8 x 137.5MB/s, 12 x 92.5MB/s, 16 x 70MB/s, 20 x 55MB/s, 24 x 47.5MB/s
      Dual Link with LSI 9211-8i (2400MB/s*): 12 x 182.5MB/s, 16 x 140MB/s, 20 x 110MB/s, 24 x 95MB/s
      * Half the 6Gb bandwidth because it only links at 3Gb with SATA disks
      Intel® SAS2 Expander RES2SV240 - 10W
      Single Link with LSI 9211-8i (2400MB/s): 8 x 275MB/s, 12 x 185MB/s, 16 x 140MB/s (112MB/s*), 20 x 110MB/s (92MB/s*)
      * Avoid using slower-linking disks with expanders, as they bring the total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
      Dual Link with LSI 9211-8i (4000MB/s): 12 x 235MB/s, 16 x 185MB/s
      Dual Link with LSI 9207-8i (4800MB/s): 16 x 275MB/s
      LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
      Single Link with LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds): 8 x 500MB/s, 12 x 340MB/s
      Dual Link with LSI 9300-8i (*): 10 x 510MB/s, 12 x 460MB/s
      * Tested with SATA3 devices; max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get closer to SAS3 speeds; with SAS3 devices the limit here would be the PCIe link, which should be around 6600-7000MB/s usable.
      HP 12G SAS3 Expander (761879-001)
      Single Link with LSI 9300-8i (2400MB/s*): 8 x 270MB/s, 12 x 180MB/s, 16 x 135MB/s, 20 x 110MB/s, 24 x 90MB/s
      Dual Link with LSI 9300-8i (4800MB/s*): 10 x 420MB/s, 12 x 360MB/s, 16 x 270MB/s, 20 x 220MB/s, 24 x 180MB/s
      * Tested with SATA3 devices, no Databolt or equivalent technology, at least not with an LSI HBA; with SAS3 devices the limit would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.
      Intel® SAS3 Expander RES3TV360
      Single Link with LSI 9308-8i (*): 8 x 490MB/s, 12 x 330MB/s, 16 x 245MB/s, 20 x 170MB/s, 24 x 130MB/s, 28 x 105MB/s
      Dual Link with LSI 9308-8i (*): 12 x 505MB/s, 16 x 380MB/s, 20 x 300MB/s, 24 x 230MB/s, 28 x 195MB/s
      * Tested with SATA3 devices; the PMC expander chip includes similar functionality to LSI's Databolt; with SAS3 devices the limit would be around 4400MB/s with single link, and the PCIe slot with dual link, which should be around 6600-7000MB/s usable.
      Note: these results were taken after updating the expander firmware to the latest available at this time (B057); it was noticeably slower with the older firmware it shipped with.

      SATA2 vs SATA3
      I often see users on the forum asking if changing to SATA3 controllers or disks would improve their speed. SATA2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market; if buying a new board or controller you should buy SATA3 for the future, but except for SSD use there's no gain in changing your SATA2 setup to SATA3.

      Single vs. Dual Channel RAM
      In arrays with many disks, and especially with low "horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed; obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24-drive arrays:
      Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz: Single Channel - 99.1MB/s, Dual Channel - 132.9MB/s
      Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz: Single Channel - 131.8MB/s, Dual Channel - 184.0MB/s

      DMI
      There is another bus that can be a bottleneck for Intel based boards, much more so than SATA2: the DMI that connects the south bridge or PCH to the CPU. Socket 775, 1156 and 1366 use DMI 1.0; socket 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.
      DMI 1.0 (1000MB/s): 4 x 180MB/s, 5 x 140MB/s, 6 x 120MB/s, 8 x 100MB/s, 10 x 85MB/s
      DMI 2.0 (2000MB/s): 4 x 270MB/s (SATA2 limit), 6 x 240MB/s, 8 x 195MB/s, 9 x 170MB/s, 10 x 145MB/s, 12 x 115MB/s, 14 x 110MB/s
      DMI 3.0 (3940MB/s): 6 x 330MB/s (onboard SATA only*), 10 x 297.5MB/s, 12 x 250MB/s, 16 x 185MB/s
      * Despite being DMI 3.0**, Skylake, Kaby Lake, Coffee Lake, Comet Lake and Alder Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.
      ** Except the low-end H110 and H310 chipsets, which are only DMI 2.0; Z690 is DMI 4.0 and not yet tested by me, but I expect the same result as the other Alder Lake chipsets.
      DMI 1.0 can be a bottleneck using only the onboard SATA ports; DMI 2.0 can limit users with all onboard ports used plus an additional controller, onboard or on a PCIe slot that shares the DMI bus. In most home-market boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (more top-of-the-line boards, usually with SLI support, have at least 2 such slots); server boards usually have 2 or 3 slots connected directly to the CPU, and you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4.
      UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0: 6 x 203MB/s, 7 x 173MB/s, 8 x 152MB/s
      Ryzen link - PCIe 3.0 x4 (3940MB/s): 6 x 467MB/s (onboard SATA only)

      I think there are no big surprises; most results make sense and are in line with what I expected, with the possible exception of the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in the results from other users due to different hardware and/or tunable settings, but I would be surprised by big differences; reply here if you can get a significantly better speed with a specific controller.

      How to check and improve your parity check speed
      System Stats from the Dynamix V6 Plugins is usually an easy way to find out if a parity check is bus limited: after the check finishes, look at the storage graph. On an unlimited system it should start at a higher speed and gradually slow down as it gets to the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in a worst-case scenario. See the screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each size, but the principle is the same). If you are not bus limited but still find your speed low, there are a couple of things worth trying:
      Diskspeed - your parity check speed can't be faster than your slowest disk; a big advantage of Unraid is the possibility to mix different size disks, but this can lead to an assortment of disk models and sizes. Use this to find your slowest disks, and when it's time to upgrade, replace those first.
      Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more; on others it makes little or no difference.
      That's all I can think of; all suggestions are welcome.
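A quick way to sanity-check any of the results above is to divide the usable bus bandwidth by the number of disks being checked; a sketch using the LSI 9211-8i in a PCIe gen2 x4 slot (the starred 190MB/s result) as the example:

```bash
# Per-disk ceiling ≈ usable slot bandwidth / number of disks.
# PCIe gen2 x4 ≈ 2000MB/s shared by 8 disks ≈ 250MB/s each, so the measured
# ~190MB/s per disk is bus limited rather than disk limited.
echo $(( 2000 / 8 ))
```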
  20. The same tunables results happen for me when there is a bus bottleneck or the check is hitting the maximum speed of the slowest disk with all settings. Are all your WD30EZRX disks 1TB/platter, or do you have some 750GB/platter models? If all are 1TB/platter, parity check speed should start at 145/150MB/s; if you have at least one 750GB/platter disk, parity should start at about 125/130MB/s. But this doesn't explain why, according to what you posted in the SAS2LP thread, your starting speed is higher than with the SAS2LP yet the average speed ends up lower.
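A quick way to check an individual disk's outer-track speed (sdX is a placeholder; roughly 145-150MB/s vs 125-130MB/s would separate the 1TB/platter from the 750GB/platter models, per the figures above):

```bash
# Buffered read speed at the start of the disk (outer tracks).
hdparm -t /dev/sdX
# Or with dd, reading the first 1GB directly from the device.
dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
```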
  21. Never had any problems, and most of my servers only have 4GB, but take note of the original values just in case.
  22. Poll_attributes is for reading disk temperatures; it defaults to 1800 seconds and doesn't affect speed unless set so low that it's constantly polling the disks. In Unraid 6 md_write_limit is not visible in the GUI; I don't know if it's still used, but I still set it to the value recommended by the tunables tester. You can find all 3 settings on your flash drive in /config/disk.cfg. As for what values to set, I always choose unthrottled, but it depends on the amount of RAM you have; settings only take effect after rebooting.
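A quick way to see what is currently stored, assuming the standard flash mount point; the tunable names in the comment are examples from memory and may differ between Unraid versions:

```bash
# List the md tunables saved on the flash drive.
grep -E '^md_' /boot/config/disk.cfg
# Typical entries include md_num_stripes, md_sync_window and md_write_limit.
```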
  23. I believe that it is always better than leaving the server running while a disk overheats; the disk will cool down slowly. Also, you can set a low critical temp: in my case my disks are normally at around 35C/95F, my Hot warning is at 40C/104F and Critical at 45C/113F. I know that's a low critical temp, but if any of my disks reach 45C there's something wrong with the cooling. Would like to add to my request: send an email before shutdown, like "disk x reached critical temp, server shutdown in 60 seconds" (not so important, but nice), and the possibility to abort the shutdown. Thanks
  24. Would love an option for the server to perform a clean shutdown if a disk reaches critical temp. Yesterday a fan on a 5-in-3 cage went bad, and thanks to the notifications system I got an email and was able to remotely shut down my server, but then I thought: what if I was somewhere without internet? Or if the fan broke during the night? I would wake up to 5 cooked disks :'( Thanks in advance.
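Until something like this is built in, a rough sketch of the idea as a script that could be run from cron; the device list, threshold and SMART attribute parsing are assumptions for illustration, so test it carefully before trusting it with a real shutdown:

```bash
#!/bin/bash
# Sketch only: cleanly power down if any listed disk exceeds CRIT_TEMP (Celsius).
CRIT_TEMP=45
DISKS="/dev/sdb /dev/sdc /dev/sdd"   # placeholder device list

for d in $DISKS; do
    # SMART attribute 194 is the usual temperature field on SATA disks.
    temp=$(smartctl -A "$d" | awk '$1 == 194 {print $10}')
    if [ -n "$temp" ] && [ "$temp" -ge "$CRIT_TEMP" ]; then
        echo "$d reached ${temp}C, shutting down" | logger
        poweroff
    fi
done
```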
  25. I used rsync -av --progress --remove-source-files /mnt/diskX/ /mnt/diskY/ to convert my disks to XFS, and one of them red-balled during the rsync; in my case it corrupted one file.
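For anyone doing the same conversion, a hedged suggestion: copying without --remove-source-files first and then running a checksum dry-run makes this kind of corruption visible before the source is deleted:

```bash
# Compare source and destination by checksum without transferring anything;
# any file rsync lists here differs between the two disks.
rsync -avcn /mnt/diskX/ /mnt/diskY/
```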