lonnie776

Members
  • Content Count: 37

Community Reputation

2 Neutral

About lonnie776

  • Rank: Newbie

  1. I also have this problem. For me it is a bit bigger of an issue though, as I run 4 VMs on Unraid, 2 of which are critical machines. Reboots are extremely disruptive. I would love to find out why this keeps happening; it happens to me once every couple of weeks.
  2. Ahh yes, I have a PowerEdge c1000 with a PERC5/E and it has awful throughput. RAID 10 on that thing was still crazy slow if the reads were not contiguous.
  3. Yup, I am very happy with the results. A little sad I won't be able to pull a true 10Gb out of my NIC though. But c'est la vie.
  4. Per disk is around 100MB/s. I have a few old disks that might be slowing things down. (A quick way to time a per-disk sequential read is sketched after this list.)
  5. I figured out my issues.
     Issue #1: Only an x4 connection. Solution: when I was playing around with the cables trying to figure something else out, I noticed one cable to my HBA had not clicked in properly. Reinserted, rebooted and voila, an x8 connection, but still a limit of 425MB/s.
     Issue #2: Total bandwidth of 425MB/s max. Solution: it turns out that on my particular motherboard (Asus X99-A) only 3 of the 6 PCIe slots are 3.0. Also, my CPU (Intel 5820K) only has 28 available PCIe lanes; the combination of these factors meant that with all the extra cards I have installed, the HBA wasn't getting its full share of lanes. (A quick check for the negotiated link width is sketched after this list.)
  6. Oh ok, so I assume you have been testing only the disks behind the expander and are getting speeds above the ~500MB/s that we have been getting?
  7. Ahh, that makes sense. I would also imagine that the link speed spills all the way back to the expander and HBA. Also Johnnie, I noticed you have 2 HBAs in your system. Is one built-in, or are you connecting two HBAs to your SAS expander?
  8. I was able to find this little nugget which gives some insight into the theoretical vs actual throughput of a SAS cable running at 3Gbps per link (the arithmetic is worked through after this list). Just a thought, but is there a way to force a 6Gbps connection to the drives? All of mine support it, but by default they connect at SATA 2.0 (3Gb/s) for stability/compatibility reasons.
  9. So I had another look at the diags you posted and there are most definitely 2 expanders, if not 3. But is there also a P410i in the MD1000? If there is, I would disconnect that, as I suspect it would allocate some bandwidth to itself. Also Johnnie, would you mind posting your diags so I know what a properly set-up system should look like? Specifically, I am interested in the LSSCSI.txt file under system.
  10. Yup, that works for me. So just to reiterate: your system reports an x8 connection through the HBA, and you are running 8 disks off of one 4x port in the rear and 2 disks in the front cage, which should each have a full connection (no multiplexing)? So I am seeing that you are running through a second, integrated expander in the MD1000. Does that sound correct?
  11. No, I think you are on the right track. I am running 25 disks over one 4x connection right now (even though 2 physical cables are connected) and I am only getting about half the bandwidth I should be getting over that 4x port (the math on how that shared ceiling divides across disks is after this list). I would be interested in seeing how you have everything cabled. I am also interested in trying some different cables on my system. I'll do some more troubleshooting in a day or so, as the parity check should be done by then. I'll make sure to document and report my findings.
  12. 2 questions for you, Johnnie. 1. Are you using the newer 12Gb/s model of the HP SAS expander? 2. Can you check your expander and see how it's cabled? Also, there should be LEDs near the back IO plate; can you see what yours look like? I have mine cabled with the HBA going to the two ports marked specifically for the HBA. I also saw a note on the expander about the two cables going to the HBA needing to be the same length... makes sense. Are yours the same length?
  13. FYI, here are the results from a copy from SSD to SSD (cache to cache, RAID 10), which is also connected to the HP SAS expander. Methinks I see a common thread. An x8 vs x4 test should show a doubling of speed.
  14. PCIe 3.0 x16 running in x8, I believe.
  15. Hmm... Very interesting. I believe mine is running the latest firmware as well; however, my HBA is not. I'll try re-cabling and a firmware update once my parity check is done, as I should be able to get x8 with my setup. I'll also test my cache drives the same way you tested yours, but mine will be from cache to cache. Also, you could always run a multimeter across the cable to test that all the pins are actually connected and have low resistance.
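
A few rough sketches for the points above. First, the ~100MB/s per-disk figure from post 4 can be sanity-checked by timing a large sequential read straight off the block device. This is only a minimal sketch: /dev/sdb is a placeholder for whichever disk you want to test, it needs root, and you'd want to drop the page cache first (echo 3 > /proc/sys/vm/drop_caches) so cached reads don't inflate the number.

    import time

    DEV = "/dev/sdb"             # placeholder -- point this at the disk under test
    CHUNK = 1024 * 1024          # read in 1 MiB chunks
    TOTAL = 1024 * 1024 * 1024   # read 1 GiB in total

    with open(DEV, "rb", buffering=0) as f:
        start = time.monotonic()
        done = 0
        while done < TOTAL:
            buf = f.read(CHUNK)
            if not buf:          # hit the end of the device
                break
            done += len(buf)
        elapsed = time.monotonic() - start

    print(f"{done / elapsed / 1e6:.0f} MB/s sequential read from {DEV}")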
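For the x4-vs-x8 training issue in post 5 (and the slot in post 14), the negotiated PCIe link width and speed can be read out of sysfs on any Linux box, Unraid included. A minimal sketch:

    import glob, os

    def attr(dev, name):
        # read one sysfs attribute, e.g. current_link_width
        with open(os.path.join(dev, name)) as f:
            return f.read().strip()

    # Compare the negotiated link width/speed against each device's maximum --
    # a card that trained at x4 in an x8 slot shows up immediately here.
    for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
        try:
            cur_w = attr(dev, "current_link_width")
            max_w = attr(dev, "max_link_width")
            cur_s = attr(dev, "current_link_speed")
        except OSError:
            continue  # not every device exposes link attributes
        print(f"{os.path.basename(dev)}: x{cur_w} (max x{max_w}) @ {cur_s}")

lspci -vv reports the same information in its LnkSta lines if you'd rather not script it.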
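The theoretical side of post 8 works out as follows, assuming standard 8b/10b encoding on a 3Gbps SAS/SATA 2.0 link (10 bits on the wire per data byte):

    line_rate_gbps = 3.0                        # SAS-1 / SATA 2.0 per-lane rate
    per_lane = line_rate_gbps * 1e9 / 10 / 1e6  # 8b/10b: 10 line bits per byte
    lanes = 4                                   # one SFF-8087/8088 wide port

    print(f"per lane: {per_lane:.0f} MB/s, x{lanes} wide port: {per_lane * lanes:.0f} MB/s")
    # -> per lane: 300 MB/s, x4 wide port: 1200 MB/s, before protocol overhead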
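And for post 11: during a parity check every disk reads at once, so a single wide port's ceiling divides across all of the disks behind it. Under the same 3Gbps/8b/10b assumptions:

    port_mb_s = 4 * 300   # 4 lanes x 300 MB/s (3 Gbps, 8b/10b-decoded)
    disks = 25

    print(f"~{port_mb_s / disks:.0f} MB/s per disk when all {disks} read at once")
    # -> ~48 MB/s per disk, well under a drive's ~100+ MB/s sequential rate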