lonnie776

Members
  • Posts: 38
  • Joined
  • Last visited

lonnie776's Achievements

Noob (1/14)

Reputation: 2

  1. I have a large Unraid server with 17 array drives, 2 parity drives, and a RAID1 SSD cache pool. On it I run 3 VMs and up to 12 Docker containers, some of which are I/O intensive. I often see high iowait %, but in my case I know it's because I am simply demanding too much from my disks, which would be fine if it didn't cripple every other part of the system. Years back I found a way to stop iowait from consuming the whole CPU: Linux lets you isolate CPU cores from the system so you can dedicate them to other tasks (VMs/Docker). That way, when the system is bogged down by iowait, your VMs and Docker containers can keep running happily on the isolated cores, although their I/O may still suffer if they access the array/pool that is causing the iowait. As I understand it, my situation is different from yours, but hopefully this trick will still help you work around some of the headaches.

     To isolate the cores, go to your flash drive and edit /syslinux/syslinux.cfg. Here is my default boot mode, which I have edited to include "append isolcpus=4-9,14-19". This option forces the system to run on cores 0-3 and 10-13, leaving the isolated cores idle:

     label Unraid OS
       menu default
       kernel /bzimage
       append isolcpus=4-9,14-19 initrd=/bzroot

     I have an old hyperthreaded 10-core Xeon, so I have 20 virtual cores (0-19). I chose to keep 4 physical cores for the system, since plugins still run on it, and I isolated the other 6 for VMs and Docker containers. For this to work properly you must pin each VM and Docker container to the isolated cores of your choosing (see the sketch after this list for one way to verify and pin). Now when you are plagued by iowait, your Docker containers and VMs will still have processing power. I hope this helps.

     Edit: After looking into this a bit further, I found that this has been implemented in the GUI. Now you simply go to Settings -> CPU Pinning.
  2. I also have this problem. For me it is a bit bigger of an issue, though, as I run 4 VMs on Unraid, two of which are critical machines, so reboots are extremely disruptive. I would love to find out why this keeps happening. It happens to me about once every couple of weeks.
  3. Ahh yes, I have a PowerEdge C1000 with a PERC5/E and it has awful throughput. RAID 10 on that thing was still crazy slow if the reads were not contiguous.
  4. Yup, I am very happy with the results. A little sad I won't be able to pull a true 10Gb out of my NIC, though. But c'est la vie.
  5. Per-disk throughput is around 100MB/s. I have a few old disks that might be slowing things down.
  6. I figured out my issues.

     Issue #1: Only a x4 connection. Solution: When I was playing around with the cables trying to figure something else out, I noticed that one cable to my HBA had not clicked in properly. Reinserted, rebooted and voila, a x8 connection, but still a limit of 425MB/s.

     Issue #2: Total bandwidth capped at 425MB/s. Solution: It turns out that on my particular motherboard (Asus X99-A) only 3 of the 6 PCIe slots are 3.0. Also, my CPU (Intel 5820K) only has 28 available PCIe lanes. The combination of these factors meant that, with all the extra cards I have installed, I had poorly chosen a slot for my HBA and it was only getting a x1 connection at PCIe 2.0 speeds. After looking into it, PCIe 2.0 has a bandwidth limit of 500MB/s per lane (there is a quick way to check a card's negotiated link in the sketch after this list). I swapped the card with my 10Gb NIC (which was in a 3.0 slot) and now I get over 2GB/s throughput, at the cost of only having 500MB/s of bandwidth on my NIC, which in my opinion is by far the lesser of two evils.

     I know this doesn't really help with your issues, 1812, but I did try my HBA connections in different ports and it is truly indifferent as to port allocation on the expander. Past that, I would suggest looking into the MD1000 for any limitations there, as what you have should supply 1100MB/s. I hope this helps someone.
  7. Oh ok, so I assume you have been testing only the disks behind the expander and are getting speeds above the ~500MB/s that we have been getting?
  8. Ahh, that makes sense. I would also imagine that the link speed spills all the way back to the expander and HBA. Also, Johnnie, I noticed you have 2 HBAs in your system. Is one built in, or are you putting two HBAs into your SAS expander?
  9. I was able to find this little nugget, which gives some insight into the theoretical vs. actual throughput of a SAS cable running at 3Gbps per link (a rough calculation is sketched after this list). Just a thought, but is there a way to force a 6Gbps connection to the drives? All of mine support it, but by default they connect at SATA 2.0 (3Gb/s) for stability/compatibility reasons.
  10. So I had another look at the diags you posted and there are most definitely 2 expanders, if not 3. But is there also a P410i in the MD1000? If there is, I would disconnect that, as I suspect it would allocate some bandwidth to itself. Also, Johnnie, would you mind posting your diags so I know what a properly set up system should look like? Specifically, I am interested in the LSSCSI.txt file under system.
  11. Yup, that works for me. So just to reiterate: your system reports an x8 connection through the HBA, and you are running 8 disks off of one x4 port in the rear and 2 disks in the front cage, which should have a full connection each (no multiplexing)? So I am seeing that you are running through a second, integrated expander in the MD1000. Does that sound correct?
  12. No, I think you are on the right track. I am running 25 disks over one x4 connection right now (even though 2 physical cables are connected), and I am only getting about half the bandwidth I should be getting over that x4 port. I would be interested in seeing how you have everything cabled. I am also interested in trying some different cables on my system. I'll do some more troubleshooting in a day or so, as the parity check should be done by then. I'll make sure to document and report my findings.
  13. Two questions for you, Johnnie. 1. Are you using the newer 12Gb/s model of the HP SAS expander? 2. Can you check your expander and see how it's cabled? Also, there should be LEDs near the back I/O plate; can you see what yours look like? I have mine cabled with the HBA going to the two ports marked specifically for the HBA. Also, I saw a note on the expander about the two cables needing to be the same length going to the HBA... makes sense. Are yours the same length?
  14. FYI, here are the results from a copy from SSD to SSD (cache to cache, RAID 10), which is also connected to the HP SAS expander. Methinks I see a common thread. An x8 vs. x4 test should show a doubling of speed.
  15. PCIe 3.0 x16 running at x8, I believe.
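
A minimal sketch of the pinning step from post 1 (the container name and image below are made up for illustration, and the isolated-CPU file is only present on reasonably recent kernels): after rebooting with isolcpus set, you can confirm which cores the kernel isolated and pin a Docker container to them.

    cat /sys/devices/system/cpu/isolated
    # should print 4-9,14-19 with the append line above
    docker run -d --name=example --cpuset-cpus="4-9,14-19" example/image

For VMs the same idea applies through CPU pinning in each VM's settings, or via Settings -> CPU Pinning on the newer releases mentioned in the edit to post 1.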
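
On the PCIe link issue from post 6, here is a rough way to confirm what a card actually negotiated (the 01:00.0 address is only an example; substitute your HBA's bus address from the first command):

    lspci | grep -i sas
    # note the HBA's bus address, e.g. 01:00.0
    lspci -vv -s 01:00.0 | grep -i lnksta
    # e.g. "LnkSta: Speed 5GT/s, Width x1"

PCIe 2.0 runs at 5 GT/s, roughly 500MB/s of usable bandwidth per lane, so a x1 link lines up with a ceiling in the 425-500MB/s range, while PCIe 3.0 is close to 1GB/s per lane, giving an x8 slot several GB/s of headroom.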
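
And on the theoretical vs. actual question from post 9, a back-of-the-envelope figure (my own arithmetic, not from the linked article): a 3Gb/s SAS/SATA link uses 8b/10b encoding, so each lane carries at most 3Gb/s x 8/10 / 8 = 300MB/s of payload, and a x4 wide port therefore tops out around 4 x 300MB/s = 1200MB/s before protocol overhead, which is why measured totals land noticeably below that.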