bsim

Everything posted by bsim

  1. So far (running a parity check right now) the slower speeds have nothing to do with the single thread rating of the processors...I'm now running processors with double the single thread rating (dual Opteron 6328's in place of the original dual Opteron 6272's)...and almost no change at all. I saw a bigger jump (~15MB/s) migrating 3 1TB drives onto one 5TB drive and pulling the 1TB drives out of the array. Still pulling only mid 50's from a parity check. I'm still leaning towards my issue being a limitation of the motherboard OR some sort of connection between the motherboard and Unraid (drivers). Is there a way to tell what driver is being used for the drives through Unraid? Is there a different driver for AHCI versus the SAS controllers, or are they all rolled into one? Can I tell what AHCI driver is being used?
     ---------------------------------------------------------------------------------------
     UPDATE: I may have answered my question with more research...
     dmesg | grep -i --color ahci
     ahci 0000:00:11.0: AHCI 0001.0100 32 slots 6 ports 3 Gbps 0x3f impl SATA mode
     grep -i SATA /var/log/syslog | grep --color -i 'link up'
     kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
     Are these only referencing the odd motherboard unassigned device I have, or is this being used for the entire array?
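     A minimal way to check this (a sketch assuming a stock Unraid/Linux setup; link and device numbering will differ per system) is to list the negotiated speed of every ATA link via sysfs, or to pull every "SATA link up" line from the kernel log at once:
     # negotiated speed of each ATA link on the onboard (AHCI) controller
     for l in /sys/class/ata_link/link*; do echo "$l: $(cat "$l"/sata_spd 2>/dev/null)"; done
     # all link-up messages still in the kernel ring buffer
     dmesg | grep -i 'SATA link up'
     These only cover drives on the onboard AHCI ports; drives behind LSI SAS HBAs report through mpt2sas/mpt3sas instead, so lines like the above would only describe the motherboard-attached devices.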
  2. Added 2 5TB drives and removed 3 1TB drives...will remove 3 2TB drives and plan on adding another 5TB drive...so yes, 3 slow drives will be removed overall...I was primarily thinking of the older drive speeds dragging down my averages...the replacements will bring up my average direct drive speeds by 50-75MB/s...the question is whether this translates into faster parity speeds...I will find out!
  3. As an update, I've removed/replaced the 1TB drives from the array and my parity speeds went up to the 50s...still low but better...I plan on migrating out the 2TB drives to see if it helps further. I did confirm that my Supermicro backplane is simply a pass-through. While I have the different processors on hand, I plan on swapping in dual AMD Opteron 6328's (OS6328WKT8GHK)...they about double my single thread rating (higher speeds, but fewer cores)...a step just to determine if the processors are really holding me back (vs the motherboard itself). After that, I plan on swapping in a new motherboard completely: a Supermicro X9DR3-LN4F+ with dual Intel E5-2637 V2's. I gain my built-in KVM again, and gain 4 PCIe3 x16 slots. I can't find the old motherboard's (Supermicro H8DGi) bus speed/lanes anywhere, but it was PCIe2; the new motherboard will have a bus speed of up to 8 GT/s and PCIe3. That gives me a great upgrade path to an SSD rack in the future.
  4. That's the point: doing the new config and then unassigning drives meant the drive icons never went blue, and I never got the option to start the array, until I unassigned before the new config, then got the blue balls and the option to start rebuilding.
  5. I did attempt to unassign them twice after "new config"; neither took (just "invalid configuration" both times)...could it be the removal of three drives at once that caused a glitch? As soon as I unassigned before "new config" it worked flawlessly. Overall, doing it before makes much more logical sense for the process anyway.
  6. It looks like the "shrink array" page is the problem...It has you unassigning the drives after the "new config". Using a bit of logic I was able to figure out that unassigning the drives then "new config" works beautifully. The document needs to be fixed.
  7. Problems following https://wiki.unraid.net/Shrink_array
     ---------------------------------------
     I have a dual parity array and have recently done a full parity check without error, running the latest Unraid Pro...
     I have a full printout of all drive assignments.
     I rsynced (with remove) all data off of three old 1TB hard drives (to the array) and confirmed they are empty.
     I shut down the array and changed the included shares list to only include the drives I want to keep in the array (checked all but 3 drives).
     Then Tools, New Config, retain all, yes, apply.
     On Main, I unassigned the three drives to be removed and double checked that all other drives are listed correctly.
     I cannot start the array due to "invalid configuration" to rebuild parity without the drives.
     What am I missing? Is this a bug? Is there a workaround? Shouldn't I be able to remove as many drives as I want and just rebuild parity?
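     For reference, a hedged sketch of the "rsynced (with remove)" step (disk numbers are illustrative; test with --dry-run before running it for real):
     # move everything off disk3 onto disk5, deleting each source file once it has copied
     rsync -avX --remove-source-files /mnt/disk3/ /mnt/disk5/
     # rsync leaves the empty directory tree behind; confirm no files remain
     find /mnt/disk3 -type f | head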
  8. Does this conversation sound like my processor's single thread rating is my problem? Is there anything else that may be the problem?
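     One quick way to see whether a single pegged thread is the limiter during a parity check (a generic sketch; no Unraid-specific process names are assumed):
     # per-thread CPU snapshot; one thread near 100% while the rest sit idle would
     # point at a single-thread bottleneck rather than the disks or the bus
     ps -eLo pcpu,psr,comm --sort=-pcpu | head -15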
  9. Is there a reason why parity doesn't take advantage of multi-core CPUs and only uses a single thread? Is this a holdover from the old Unraid versions? I would think that with parity just being a lot of selective stripes of reads, Unraid would be able to blast multiple cores all at once. Is there any future probability of it becoming truly multi-core? Would Tom be the best guy to ask?
  10. From here....https://www.cpubenchmark.net/singleThread.html it shows that if I swap in my spare Opteron 6328's, I should double my single thread performance...As my motherboard only supports Opteron 6000's max, I'm thinking that this is the best I can do...I'll see how it affects the parity check speed...else I may try swapping in a different motherboard. The bigger question is why a server-level software only uses a single thread to do parity! I would have figured with how many cores I have, I wouldn't have had any bottlenecks processor-wise...This seems like a HUGE holdover from the older Unraid days...is there any mention of Unraid ever migrating away from it?
  11. Would it really be killing my speed that drastically? If I were to swap out the processors with, say, Opteron 6386 SE's (which have almost double the single thread rating, ~1200 vs the 6272's ~600, and are the highest Opteron 6000 series supported), what kind of improvement would I see? Does the parity check always only run on a single thread? Is it possible to change it to multi? Would my Supermicro backplane be a possible culprit?
  12. Arecas are out of the equation completely at this point...now 2 LSI controllers (a 9207-8i (SAS2308) and a 9201-16i (SAS2116)), with a SAS2008 running my SSD cache drives and some extra odd drives. I attempted those tunables and am still not breaking past the 40s. I have dual AMD Opteron 6272 @ 2100 processors (32 cores total) and 44GB of RAM (Supermicro H8DGi motherboard). The only other item in the configuration of interest might be the backplane I use in my Supermicro case...but I doubt that it would cause a bottleneck, given that it supports the large drives I have without any issues. Could it cause a bottleneck? I attached load statistics while not doing a parity check... and load statistics while doing a parity check...
  13. I've considered running the tunables tester, but it hasn't been updated in a long while and doesn't even test the same tunables available in the most recent versions of Unraid. It would be great if it were updated, however. What was the difference between the 40 you were getting and what you got afterwards? Did you run the tunables script? What Unraid version did you run it on at the time?
  14. Final Verdict: ****24 spinning drives do not max out one PCIe1 x8 Areca enough to slow down a parity check. I did run a speed test on all of my drives before switching from the dual Arecas to the LSI controllers, just to eliminate any possible variables. The results were consistent between the two setups. My next step will be to slowly phase out the slower drives (it is interesting to see the speed of a drive directly correlate with the size of the drive), but that will happen only as I need additional space.
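      A quick way to repeat that per-drive speed test (a sketch assuming hdparm is available and that sdb..sdz are the array members; adjust the device list to your system):
      # time buffered sequential reads from each drive in turn
      for d in /dev/sd[b-z]; do echo "== $d"; hdparm -t "$d"; done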
  15. I'm thinking upgrading the cards for my next stage will be moot...Here is what I've found...I expected more from spreading them across separate controllers. I took 12 of the 24 spinning drives off of the Areca 1280ML 2GB (PCIe1 x8) and put them on the backup Areca 1280ML 2GB card...I had to see if there was any movement in the parity check times/transfer rates. The transfer of the drives to the second card worked flawlessly/automatically, to my surprise. I had a screenshot of the array just in case I had to remap things out! Next step of research...I have two new LSI IT-mode cards that will run in two of my true PCIe2 x16 slots...
      LSI SAS 9201-16i 6Gbps PCIe2 x8
      LSI SAS 9207-8i PCIe3 x8
      The nice part of this next migration is that it will allow me to actually pull SMART data (dashboard temps, preclear SMART reports...) from the array (Unraid's support for Areca sucks and has hard-coded issues that have never been/will never be addressed).
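      One way to confirm what link each HBA actually trains at once it is in the new slot (a sketch; the PCI address is an example, look yours up with plain lspci first):
      # find the card's bus address, then compare its capable vs. negotiated link speed/width
      lspci | grep -i LSI
      lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'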
  16. Those HGST HDN721010ALE604's are NAS-rated drives...should be comparable to the IronWolf models, which are slightly cheaper. I noticed all the rest on the list are only 150MB/s. It would be awesome if the higher speeds would come down in price, but I can get 15TB (5TB drives at $99 a piece) for that price and still have money left over.
  17. Thanks for helping out. I would agree that SSDs do tend to come in the 500 MB/s range (not sure why I included SSDs anyway). I did oops on the PCIe3 x8 max: 16,000 is PCIe4 (not sure why I included it to begin with)! Regarding spinners, I have never seen a common consumer desktop drive get beyond the 170/180 range, even for the higher-end Black and Red versions. Here are several CrystalDiskMark runs for typical 4TB drives. Normal drives are MUCH lower. I've seen maximums or burst rates advertised as huge, worthless numbers before. I think that parity checks are closer to the 512k reads/writes than to sequential.
  18. Thank you for the help! I used your help to really start digging and now really understand. Is there anything that doesn't look right below?
      ------------------------------------------------------------------------------------
      Thinking out loud below and working through my questions for others' reference... It looks like the PCIe limitations of the card put me minimally above the necessary bus rate. So it sounds like just offloading half of the drives onto a second Areca 1280ML (was a backup in storage) would nullify any loss for my collection of spinners.
      ------------------------------------------------------------------------------------
      My motherboard is a Supermicro H8DGi (https://www.supermicro.com/Aplus/motherboard/Opteron6000/SR56x0/H8DGi.cfm) (full virtualization, 44GB, 32 cores) and has 3 PCIe2 x16's and one PCIe2 x8 that I can use. I don't really have a problem using 3 of my 4 available slots for controllers, or buying up to PCIe3 cards when I only have PCIe2 right now. I considered upgrading to an Adaptec 71605 (16 drives) and a Dell PERC H200 (8 drives) rather than just running 3 Dell PERC H200's (8 drives x 3) or 3 SAS 9207-8i's (8 drives x 3). The bigger question is: if I do save myself a slot and use the 16-drive Adaptec, will the 16 drives cause another bottleneck? How much bus bandwidth does each spinner typically use? How much room does the bus bandwidth math leave for adding SSDs in the future if I just go to 2x Adaptec 71605's ((16+8)+8)? How much bus bandwidth does an SSD typically use?
      ------------------------------------------------------------------------------------
      From what I've read (assuming theoretical numbers):
      7200 Desktop (non-Black/Red) spinner ~ 150 MB/s (a bit high I think for parity-type reads)
      SSD ~ 500 MB/s
      PCIe 1.0 x8 ~ 2,000 MB/s (4 SSDs OR 13 x 7200s)
      PCIe 2.0 x8 ~ 4,000 MB/s (8 SSDs OR 26 x 7200s)
      PCIe 3.0 x8 ~ 8,000 MB/s (16 SSDs OR 53 x 7200s)
      PCIe 4.0 x8 ~ 16,000 MB/s (32 SSDs OR 106 x 7200s)
      In my case with an Areca 1280ML, I am currently running:
      24 x 7200 spinners ~ 24x150 = 3,600 MB/s (the Areca would max out at less than half that)
      Using 12 spinners on each of 2 separate Areca 1280ML's would only use 1,800 MB/s per card (may be close to maxed out with REAL reads)
      Assuming 75% theoretical-to-actual for bandwidth...this may make it close!
      ------------------------------------------------------------------------------------
      Reason the SATA interface isn't an issue for spinners:
      SATA 2 = 300 MB/s
      SATA 3 = 600 MB/s
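      The same budget math as a small sketch (all of the numbers are the post's own assumptions, not measurements):
      # ~150 MB/s per 7200rpm spinner, ~75% of theoretical PCIe throughput usable
      drives=12; per_drive=150; bus=2000          # 12 spinners on one PCIe 1.0 x8 card
      need=$((drives * per_drive))                # 1800 MB/s demanded by the drives
      usable=$((bus * 75 / 100))                  # ~1500 MB/s realistically available
      echo "need ${need} MB/s, usable ~${usable} MB/s per card"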
  19. I currently have an Areca ARC-1280ML 24-port with 2GB ECC onboard (Intel IOP341 processor) running only JBOD (got 2 of them for REALLY cheap). I have 23/24 ports filled with spinners (mostly 7200RPM, some 5400), and I'm trying to decide if replacing my controller card with 2 or 3 separate controller cards (most likely SAS2008's) for the 24 drives would cause my read speeds to increase. Currently, my parity checks (all XFS drives) take about 1 day 12 hr at about 38.9 MB/s. From what I've researched, with spinners, the SATA2 of the Areca shouldn't be a limiting issue. From the documentation on the card, the best transfer rate figure I can find is 885MB/s sustained read at RAID0. The card has 6 SAS 8087 ports. The documentation doesn't really tell me what my transfer rates are for each SAS port (beyond the overall 885MB/s generic transfer rate). Do I have a bottleneck with my controller card? Will splitting the drives over multiple SAS2008's increase my read speeds/lower my parity check times?
  20. I've been searching through the site for a while and can't find any revisits of this idea in the last few years (and releases)...but having a large array on a server with plenty of resources makes the parity check a teeth-gnashing experience when the whole system's network availability basically goes down for a day or two every month. I would suspect that people go without parity checks because of this HUGE drawback. I've seen that this feature may have been introduced in the code towards the beginning of Unraid's initial release, but probably was put on hold because of larger concerns. If writing to an array during a parity check is an issue, perhaps just allowing read-only network access (SMB...) and temporarily redirecting all writes to the cache drives would solve the problem? Desperately seeking the feature (while banging head against desk)!
  21. Does it have a possibility of being abused in a network?
  22. So, I've been learning about Unraid's network attack surface, and running Nessus against my server shows a few Medium vulnerabilities... IP forwarding, from my experience, should be disabled unless the system is a router/firewall...Should I disable it?
      ------------------------------------------------------------------------------------
      IP Forwarding Enabled
      To disable in Linux... "echo 0 > /proc/sys/net/ipv4/ip_forward"
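      A minimal sketch of checking and disabling it (the persistence note is an assumption about where boot-time tweaks usually go on Unraid):
      cat /proc/sys/net/ipv4/ip_forward       # 1 = forwarding currently enabled
      sysctl -w net.ipv4.ip_forward=0         # disable it for the running kernel
      # to persist across reboots, the sysctl line can go in /boot/config/go (assumption)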
  23. I've seen this happen to others where it ends up being a fluke or a cable that flickered...I received a red X on a drive in the array (~4000 errors in a row), pulled the drive, and have been testing this thing to the max (HDTune full error scan and the preclear script)...and still nothing really shows wrong with it. Was it a fluke, so that I can just preclear it and put it back into the array? Should I trust it? I attached a CrystalDiskInfo SMART report. The errors occurred during a parity check...the only thing I can think of is that my tunables were supposedly set too high and creating log errors. Could this be what kicked the drive out? I do run on an Areca card, which is what called the drive/volume failure. This drive would be my first 5TB to go bad (it's only a few years old). I do run dual parity and have two precleared replacements. Any advice?
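      A hedged sketch of one more check before trusting the drive again (the device name and the Areca passthrough syntax are assumptions to adjust for your setup):
      smartctl -t long /dev/sdX               # start a long SMART self-test
      smartctl -a /dev/sdX                    # later: check the self-test log and reallocated/pending sector counts
      # if the disk is still behind the Areca, smartctl usually needs the controller passthrough:
      smartctl -a -d areca,1 /dev/sg0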
  24. I saw this script, but with it last having been updated 5 years ago and a few major tunables having been added/removed since (inside and outside the GUI), I didn't want to assume it would work on the latest version of Unraid (potentially giving tunables that are no longer relevant)...Does anyone know if the script is still relevant/usable for the latest version of Unraid?
  25. If I get call traces, is there one tunable that I need to back off the most?