bsim

Members
  • Content Count

    159
  • Joined

  • Last visited

Community Reputation

0 Neutral

About bsim

  • Rank
    Advanced Member

  1. Does this conversation sound like my processor's single-thread rating is my problem? Is there anything else that could be the problem?
  2. Is there a reason why parity doesn't take advantage of multi-core CPUs and only uses a single thread? Is this a holdover from the old Unraid versions? With parity just being a lot of selective striped reads, I would think Unraid would be able to blast multiple cores all at once (see the XOR sketch after this list). Is there any real chance of it becoming truly multi-core in the future? Would Tom be the best person to ask?
  3. From here, https://www.cpubenchmark.net/singleThread.html, it shows that if I swap in my spare Opteron 6328s, I should double my single-thread performance. As my motherboard only supports Opteron 6000-series CPUs at most, I'm thinking this is the best I can do. I'll see how it affects the parity check speed; otherwise I may try swapping in a different motherboard. The bigger question is why a server-level piece of software uses only a single thread to compute parity! I would have figured that with how many cores I have, I wouldn't have any processor-side bottlenecks. This seems like a HUGE holdover from the older Unraid days... is there any mention of Unraid ever migrating away from it?
  4. Would it really be killing my speed that drastically? If I were to swap out the processors for, say, Opteron 6386 SEs (which have almost double the single-thread rating, ~1200 vs. ~600 for the 6272, and are the highest Opteron 6000-series parts supported), what kind of improvement would I see? Does the parity check always run on only a single thread? Is it possible to change it to multi-threaded? Could my Supermicro backplane be a possible culprit?
  5. The Arecas are completely out of the equation at this point: I'm now on two LSI controllers (a 9207-8i (SAS2308) and a 9201-16i (SAS2116)), with a SAS2008 running my SSD cache drives and some extra odd drives. I tried those tunables and still can't break past the 40s. I have dual AMD Opteron 6272 @ 2100 MHz processors (32 cores total) and 44GB of RAM on a Supermicro H8DGi motherboard. The only other item of interest in the configuration might be the backplane in my Supermicro case, but I doubt it would cause a bottleneck given that it handles the large drives I have without any issues. Could it cause a bottleneck? I attached load statistics while not doing a parity check and load statistics while doing a parity check (a per-core logging sketch is included after this list).
  6. I've considered running the tunables tester, but it hasn't been updated in a long while and doesn't even test the same tunables available in the most recent versions of Unraid. It would be great if it were updated, though. What was the difference between the 40 you were getting and what you got afterwards? Did you run the tunables script? What Unraid version were you running at the time?
  7. Final verdict: 24 spinning drives do not max out one PCIe 1.0 x8 Areca enough to slow down a parity check. I ran a speed test on all of my drives before switching from the dual Arecas to the LSI controllers, just to eliminate any possible variables, and the results were consistent between the two setups. My next step will be to slowly phase out the slower drives (it is interesting to see how a drive's speed correlates directly with its size), but that will happen only as I need additional space.
  8. I'm thinking that upgrading the cards as my next stage will be moot... here is what I've found. I expected more from spreading the drives across separate controllers. I took 12 of the 24 spinning drives off the Areca 1280ML 2GB (PCIe 1.0 x8) and put them on the backup Areca 1280ML 2GB card, to see if there was any movement in the parity check times/transfer rates. To my surprise, moving the drives to the second card worked flawlessly and automatically; I had a screenshot of the array just in case I had to remap things! Next step of research: I have two new LSI IT-mode cards that will run in two of my true PCIe 2.0 x16 slots, an LSI SAS 9201-16i (6Gbps, PCIe 2.0 x8) and an LSI SAS 9207-8i (PCIe 3.0 x8). The nice part of this next migration is that it will let me actually pull SMART data (dashboard temps, preclear SMART reports...) from the array (Unraid's support for Areca is poor and has hard-coded issues that have never been, and will never be, addressed).
  9. Those "hgst hdn721010ale604" drives are NAS-rated; they should be comparable to the IronWolf models that are slightly cheaper. I noticed all the rest on the list are only 150MB/s. It would be awesome if the higher speeds would come down in price, but I can get 15TB (5TB drives at $99 apiece) for that kind of money and still have some left over.
  10. Thanks for helping out. I would agree that SSDs tend to come in around the 500 MB/s range (not sure why I included SSDs anyway). I did goof on the PCIe 3.0 x8 max; 16,000 MB/s is PCIe 4.0 (not sure why I included it to begin with)! Regarding spinners, I have never seen a common consumer desktop drive get beyond the 170/180 MB/s range, even for the higher-end Black and Red versions. Here are several CrystalDiskMark runs for typical 4TB drives; normal drives are MUCH lower. The maximums or burst rates I've seen advertised before are huge, worthless numbers. I think parity checks are closer to the 512K reads/writes than to the sequential figures.
  11. Thank you for the help! I used your help to really start digging and now I actually understand it. Is there anything that doesn't look right below?
      ------------------------------------------------------------------------------------
      Thinking out loud below and working through my questions for others' reference... It looks like the PCIe limitation of the card puts me minimally above the necessary bus rate. So it sounds like just offloading half of the drives onto a second Areca 1280ML (a backup I had in storage) would nullify any loss for my collection of spinners.
      ------------------------------------------------------------------------------------
      My motherboard is a Supermicro H8DGi (https://www.supermicro.com/Aplus/motherboard/Opteron6000/SR56x0/H8DGi.cfm) (full virtualization, 44GB, 32 cores) with three PCIe 2.0 x16 slots and one PCIe 2.0 x8 slot that I can use. I don't really have a problem using 3 of my 4 available slots for controllers, or buying up to PCIe 3.0 cards when I only have PCIe 2.0 slots right now. I considered upgrading to an Adaptec 71605 (16 drives) plus a Dell PERC H200 (8 drives) rather than just running 3 Dell PERC H200s (8 drives x 3) or 3 SAS 9207-8is (8 drives x 3). The bigger question is: if I do save myself a slot and use the 16-drive Adaptec, will the 16 drives cause another bottleneck? How much bus bandwidth does each spinner typically use? How much room does the bus-bandwidth math leave for adding SSDs in the future if I just go to 2x Adaptec 71605s ((16+8)+8)? How much bus bandwidth does an SSD typically use?
      ------------------------------------------------------------------------------------
      From what I've read (assuming theoretical numbers):
      7200 RPM desktop (non-Black/Red) spinner ~ 150 MB/s (a bit high, I think, for parity-type reads)
      SSD ~ 500 MB/s
      PCIe 1.0 x8 ~ 2,000 MB/s (4 SSDs OR 13 x 7200s)
      PCIe 2.0 x8 ~ 4,000 MB/s (8 SSDs OR 26 x 7200s)
      PCIe 3.0 x8 ~ 8,000 MB/s (16 SSDs OR 53 x 7200s)
      PCIe 4.0 x8 ~ 16,000 MB/s (32 SSDs OR 106 x 7200s)
      In my case with an Areca 1280ML, I am currently running 24 x 7200 spinners ~ 24 x 150 = 3,600 MB/s (the Areca would max out at less than half that). Using 12 spinners on each of 2 separate Areca 1280MLs would only need 1,800 MB/s per card (which may be close to maxed out with REAL reads). Assuming 75% theoretical-to-actual bandwidth, this may make it close! (A small calculator for these numbers is included after this list.)
      ------------------------------------------------------------------------------------
      Reason the SATA interface isn't an issue for spinners: SATA 2 = 300 MB/s, SATA 3 = 600 MB/s
  12. I currently have an Areca ARC-1280ML 24-port with 2GB ECC onboard (Intel IOP341 processor) running only JBOD (I got 2 of them REALLY cheap). I have 23 of the 24 bays filled with spinners (mostly 7200 RPM, some 5400), and I'm trying to decide whether replacing my controller card with 2 or 3 separate controller cards (most likely SAS2008s) for the 24 drives would increase my read speeds. Currently, my parity checks (all XFS drives) take about 1 day 12 hours at about 38.9 MB/s. From what I've researched, with spinners, the SATA 2 interface of the Areca shouldn't be a limiting issue. From the documentation on the card, the best transfer rate figure I can find is 885MB/s sustained read at RAID 0. The card has 6 SAS 8087 ports, but the documentation doesn't really tell me the transfer rate of each SAS port (beyond the overall 885MB/s figure). Do I have a bottleneck with my controller card? Would splitting the drives over multiple SAS2008s increase my read speeds and lower my parity check times? (A quick back-of-the-envelope check on these numbers is included after this list.)
  13. I've been searching through the site for a while and can't find this idea revisited in the last few years (and releases)... but with a large array on a server with plenty of resources, the parity check is a teeth-gnashing experience when the whole system's network availability basically goes down for a day or two every month. I suspect that people skip parity checks because of this HUGE drawback. I've seen that this feature may have been introduced in the code toward the beginning of Unraid's initial release, but was probably put on hold because of larger concerns. If writing to an array during a parity check is an issue, perhaps just allowing network read-only access (SMB...) and temporarily redirecting all writes to the cache drives would solve the problem? Desperately seeking the feature (while banging my head against the desk)!
  14. Does it have the potential to be abused on a network?
  15. So, I've been learning about Unraid's network attack surface, and running Nessus against my server shows a few Medium vulnerabilities... In my experience, IP forwarding should be disabled unless the system is a router/firewall. Should I disable it?
      ------------------------------------------------------------------------------------
      IP Forwarding Enabled. To disable it in Linux: "echo 0 > /proc/sys/net/ipv4/ip_forward"
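
Regarding the single-thread question in item 2 above: the following is a toy Python sketch, not Unraid's actual md driver code, of the idea that single parity is just an XOR of the same-offset block from every data drive, and that because stripes are independent the XOR math could in principle be spread across cores. The drive count, stripe count, and block size are made up for illustration; in practice the check is often limited by disk and controller throughput rather than by the XOR arithmetic, which is part of what the posts above are trying to pin down.

# Toy model only: XOR parity over independent stripes, computed serially and
# then in parallel across worker processes. Not Unraid's implementation.
import os
from functools import reduce
from multiprocessing import Pool
from operator import xor

BLOCK = 4096      # assumed block size, illustration only
STRIPES = 8       # number of stripes in this toy example
DATA_DRIVES = 4   # number of data drives in this toy example

def parity_block(blocks):
    """XOR the same-offset block from every data drive to get the parity block."""
    as_ints = [int.from_bytes(b, "little") for b in blocks]
    return reduce(xor, as_ints).to_bytes(BLOCK, "little")

if __name__ == "__main__":
    # Fake "drives": random bytes standing in for real disk reads.
    drives = [os.urandom(BLOCK * STRIPES) for _ in range(DATA_DRIVES)]
    stripes = [[d[i * BLOCK:(i + 1) * BLOCK] for d in drives]
               for i in range(STRIPES)]

    # Serial: one stripe after another (roughly what a single thread does).
    serial = [parity_block(s) for s in stripes]

    # Parallel: stripes are independent, so the XOR could use several cores.
    with Pool(processes=4) as pool:
        parallel = pool.map(parity_block, stripes)

    assert serial == parallel
    print(f"parity for {STRIPES} stripes matches, serial vs parallel")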
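
For the load statistics mentioned in item 5: a minimal sketch for logging per-core CPU utilization, run once with the array idle and once during a parity check, to see whether a single core is pegged while the rest sit idle. It assumes the third-party psutil package is available (it is not part of stock Unraid), and the sample count and interval are arbitrary.

# Log per-core CPU utilization at a fixed interval and flag the busiest core.
import psutil  # third-party package, assumed installed (e.g. via pip)

def log_per_core(samples: int = 12, interval: float = 5.0) -> None:
    for _ in range(samples):
        per_core = psutil.cpu_percent(interval=interval, percpu=True)
        print(f"busiest core: {max(per_core):5.1f}%  all cores: {per_core}")

if __name__ == "__main__":
    log_per_core()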
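
For the bus-bandwidth math in item 11: a small calculator built from the post's own assumed figures (150 MB/s per spinner, 500 MB/s per SSD, theoretical x8 link rates, and the 75% theoretical-to-actual derating). These are rough planning numbers, not measurements.

# Rough bandwidth budget: how many drives a PCIe x8 link can feed at full
# speed, using the assumed per-device rates from item 11.
PCIE_X8_MB_S = {"1.0": 2000, "2.0": 4000, "3.0": 8000, "4.0": 16000}
SPINNER_MB_S = 150   # assumed 7200 RPM desktop drive
SSD_MB_S = 500       # assumed SATA SSD
EFFICIENCY = 0.75    # assumed theoretical-to-actual derating

def drives_per_link(gen: str, per_drive_mb_s: float, efficiency: float = 1.0) -> int:
    return int(PCIE_X8_MB_S[gen] * efficiency // per_drive_mb_s)

if __name__ == "__main__":
    # Theoretical counts (these reproduce the 13/26/53/106 spinner figures above).
    for gen in PCIE_X8_MB_S:
        print(f"PCIe {gen} x8: ~{drives_per_link(gen, SPINNER_MB_S)} spinners "
              f"or ~{drives_per_link(gen, SSD_MB_S)} SSDs")
    # The split-across-two-Areca-cards case, with the 75% derating applied:
    usable = PCIE_X8_MB_S["1.0"] * EFFICIENCY
    print(f"12 spinners per card want ~{12 * SPINNER_MB_S} MB/s "
          f"vs ~{usable:.0f} MB/s usable per PCIe 1.0 x8 card at 75% efficiency")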
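
And for the controller question in item 12: a quick sanity check on the figures quoted there, under the assumption that the parity-check speed Unraid reports is the per-drive position rate (i.e. every drive is being read at roughly that rate at the same time).

# Back-of-the-envelope check on the numbers from item 12.
drives_read = 24            # 23 data drives plus parity, read in lockstep
per_drive_mb_s = 38.9       # reported parity-check rate
card_sustained_mb_s = 885   # best documented sustained read for the ARC-1280ML

aggregate = drives_read * per_drive_mb_s
print(f"aggregate read load ~ {aggregate:.0f} MB/s "
      f"vs ~{card_sustained_mb_s} MB/s documented for the card")
# ~ 934 MB/s, i.e. right around the card's documented ceiling, which is why the
# controller was suspected; the other posts above found roughly the same ~40 MB/s
# per drive even after moving to LSI controllers.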