Everything posted by 1812

  1. I bought 2 cards from 2 separate people on eBay at 2 separate times and just dropped them in. One server is using it (in combination with the SAS expander) to run a 7-disk 3.5" array plus 2 cache disks, with five 2.5" drives mounted via Unassigned Devices, of which 2 SAS drives are passed directly to a vm. It seems to have no problems running mixed disk sizes and types. Your mileage may vary, but that's how mine goes. There may be other cards with more SAS cabling outputs on them, but I didn't have the need so I didn't look. I only have the SAS expander in my server so I can access the external 3.5" array. Be sure to read up on the device speed limitations of this card: 6 Gb/s SAS, 3 Gb/s SATA - https://www.hpe.com/h20195/v2/getpdf.aspx/c04111455.pdf?ver=3 (the biggest penalty being that SATA SSDs can't be used to their full capability).
  2. I use an H220 HBA and an HP SAS expander, which also has a port on the back of the card to go out to another array box. Both were plug and play in my case. The expander won't get you to 9, but will do 6.
  3. Does the R710 have any secondary power management? On HP products it's iLO. I ask because there are power management settings in there that could be set to performance vs. something less power-hungry, even if there is no CPU scaling driver installed. Your BIOS may also have similar settings...
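     A quick way to check what the CPU is actually doing from the unRaid console (a rough sketch; the sysfs paths assume a cpufreq scaling driver is loaded and won't exist otherwise):

     # show the active scaling driver and governor for cpu0, if any
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver 2>/dev/null
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null

     # watch the actual clock speeds to see whether a BIOS/remote-management setting is holding them down
     watch -n1 "grep 'cpu MHz' /proc/cpuinfo"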
  4. When trying to post a new topic, if the attached image exceeded the 320KB size limit it would give you the error; you would then go and remove it or change to a smaller image, but clicking Post did nothing. Preview would still reload the preview.
  5. For a while I've been looking for a way to increase OS X networking performance over the mediocre e1000-82545em we all use. I've even run my network connection over a USB 3 card and ethernet adapter. And since the virtio-net virtual adapter delivers the same lackluster performance as the e1000, I went hunting around for alternatives.

     I ran across an article online (http://www.virtuallyghetto.com/2016/10/vmxnet3-driver-now-included-in-mac-os-x-10-11-el-capitan.html) which states that the VMXNET3 driver has been supported for virtualized networking since 10.11, and it details the steps to get it working in El Capitan. I'm currently running 10.12.x so I figured I'd give it a shot. VMXNET3 is also enabled in unRaid. To enable it in your vm, simply change <model type='e1000-82545em'/> to <model type='vmxnet3'/>. I did not have to use terminal to get it working, as Sierra automatically loaded the appropriate kext.

     Both read and write performance greatly increased. The attached image shows the differences between the e1000-82545em and vmxnet3 when reading/writing between the vm and my Synology NAS. Transfer rates were tested via SMB drag and drop of a 3.2GB video file, and using the Blackmagic disk test.

     This virtual adapter is supposed to support a 10GbE virtual connection, but I have yet to get it to work or display that connection speed. I believe it may have something to do with OS X wanting the "card" in a different slot, but I could be way off, so any help getting that figured out would be greatly appreciated. Otherwise, enjoy faster networking!
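     If you want to double-check that the change stuck after starting the vm, something like this from the unRaid console should do it (just a sketch; "macOS Sierra" is a placeholder for whatever your vm is named):

     # dump the live libvirt XML and check which NIC model the vm was started with
     virsh dumpxml "macOS Sierra" | grep "model type"
     # expected after the edit:  <model type='vmxnet3'/>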
  6. I second the recommendation for these.
  7. Probably the same SMB error that is plaguing everyone else. Try updating to 6.3.1 or read the 6.3.0 release thread for suggested options.
  8. I like that. That way, when copying things between unRaid servers, I'm not crushing the main interfaces. Granted, drive access may be slowed if they grab something from that drive. I was thinking about grabbing a Quanta LB6M, trunking the ethernet ports to my main ethernet switches, and switching all my servers over to IPoIB. Something like the LB6M is in my plan for future upgrades, but I'm pretty well set for the moment.... we'll just see how long the moment lasts.
  9. I essentially do this between 2 unRaid servers.

     Go to the Settings tab > Network Settings.
     Make sure the Mellanox is not a bonding member.
     Scroll down to the interface that is the Mellanox card. For me it is eth4 (eth0-3 being my 4 onboard gigabit ports).
     Set IP address assignment to static.
     Set the IP to something that isn't on the rest of your network (I used 10.0.0.1 on one box, then 10.0.0.2 on the other).
     I set MTU to 9000.
     Click apply.

     Assuming you set up your networking properly on the vm/client side, then just connect to the server via the IP address you defined. Having 10GbE is a bit overkill for the unRaid array due to disk read/write limitations, but when moving things between cache drives it does provide a nice bump.
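     For reference, this is roughly what those GUI settings boil down to at the command line (a sketch only; eth4 and the 10.0.0.x addresses are my values, and changes made this way don't persist across reboots the way the GUI's do):

     # on server 1: give the Mellanox port a static address on its own subnet and enable jumbo frames
     ip addr add 10.0.0.1/24 dev eth4
     ip link set eth4 mtu 9000 up

     # on server 2: same thing with the other address
     ip addr add 10.0.0.2/24 dev eth4
     ip link set eth4 mtu 9000 up

     # quick sanity check from server 1, forcing the packet out the 10GbE interface
     ping -c 3 -I eth4 10.0.0.2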
  10. Use a SAS expander with a cable out the back of the card to access another array/disk box. That's what I do for my 3.5" drives.
  11. That's a good question for the rsync gurus on here. I have another box for versioned backups with CrashPlan. I'm just leery of accidental deletions carrying over to the other copy.... so what I do works for me. A little slow at times (also bound by single core speed), but I set it and forget it.
  12. Have you tried using "turbo write" on the receiving box? For me, it's the difference between 40-50MB/s and 90-100MB/s (not with rsync specifically, but for writes in general).
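     Turbo write should be under Settings > Disk Settings as the md_write_method tunable. If I remember right, it can also be flipped on the fly from the console with mdcmd, something like the following (treat this as a sketch and double-check the syntax for your unRaid version):

     # switch the array to reconstruct write ("turbo write")
     mdcmd set md_write_method 1

     # switch back to the default read/modify/write method
     mdcmd set md_write_method 0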
  13. This "qemu-system-x86_64: warning: Unknown firmware file in legacy mode: etc/msr_feature_control" seems to be popping up around the board lately, and is somewhat connected to 6.3 and its pre-releases... but oddly, your vm stopped working before that...
  14. I'm always a fan of "if it works, it works." I don't have any issues with latency in any of my vm's, just trying to get a better grip on what causes it with other folks and how they test for it.
  15. Some audio/video issues can be resolved via CPU pinning/isolation. Post your xml and CPU thread pairings.
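     To find the thread pairings (which logical CPUs share a physical core) and see what a vm is currently pinned to, something like this works from the console (a sketch; "Windows 10" is a placeholder for your vm name):

     # the CORE column shows which logical CPUs are hyperthread siblings of the same core
     lscpu -e

     # show the current vCPU-to-host-CPU pinning for a running vm
     virsh vcpupin "Windows 10"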
  16. Shots in the dark:
     Try booting with 1 cpu.
     Can you boot windows in safe mode?
     Latest VirtIO drivers?
     If the vm has data you need, try creating a new win10 vm, and once it's set up, add a copy of the problem image as a secondary vdisk (I don't like experimenting with an original when troubleshooting).
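     A rough sketch of making a throwaway copy of the image to attach as the secondary vdisk (the paths are just examples; point them at wherever your vdisk actually lives, and shut the vm down first):

     # copy the problem vdisk so the original stays untouched; --sparse keeps the copy thin-provisioned
     cp --sparse=always /mnt/user/domains/Windows10/vdisk1.img /mnt/user/domains/Windows10/vdisk1-test.img

     # then add vdisk1-test.img as a second vdisk in the new vm's settings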
  17. (in the "My Proposed Build" thread) So, you post an inquiry, then a few days go by and only 1 of your questions is answered, you get pissy, get corrected about having an attitude, then get pissy again, and then plead again for help in spite of calling this place an unwelcoming and unhelpful community. Maybe someone will take pity on you! Your assumption that I posted my comment about response ratios as a proud fact is misguided and unfounded, based on your self-centric decision to read into what few responses you have gathered (and have not) as some greater conspiracy against you. I've only been around here for about 8 months. I've had posts of mine go unanswered. I didn't piss and moan about it and tell everyone off, alienating an online population. I asked again, and occasionally used the magic word: please. You'll catch more bees with honey... and flies with... well... your second post. I was originally going to respond to the parts of your questions I could address, but you instead decided to act like you did, and I've now spent that time addressing issues other than your questions. But again, perhaps in this unwelcoming, unhelpful community someone will take pity on you.
  18. (also in the "My Proposed Build" thread) Look at the other posts in this sub-forum and compare the views:responses ratio. You're about even on the mark. unRaid is great software, and this is a great community. But with that stick up your butt, you're not improving your chances of getting a better response from it.
  19. 2nd parity if you don't have an urgent need for more cache, which, with what you've described, you don't.
  20. What does everyone use to measure system latency in windows? I was checking on a win10 vm I set up with 2 cores on my system (of 24 total).

     When using DPC Latency Checker, I average about 1000 with very, very few intermittent jumps to 2200. But when I set an emulator pin, the average climbs to 1500 with multiple regular jumps to 8000+.

     When using LatencyMon with no emulator pin I get (in vertical order) 250, 390, 500, 502, 76, and then eventually peg out with a pagefault at 82259. When setting an emulator pin, I get: 380, 554, 777, 632, 894 and immediately peg with a hard pagefault at 170255.

     Using an emulator pin seems to cause higher numbers in both latency tests.....? Is this an expected result? Regardless of the numbers, I have zero issues using the windows vm: no laggy mouse, no audio/video problems. VMs work without any issues. Just curious what others are using to determine latency on their win VMs.
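     For anyone wanting to check or change the emulator pin on a running vm without editing the XML, virsh can do it (a sketch; the vm name and CPU list are placeholders):

     # show where the emulator threads are currently allowed to run
     virsh emulatorpin "Windows 10"

     # pin the emulator threads to host CPUs 0-1 for the running vm only
     virsh emulatorpin "Windows 10" 0-1 --live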
  21. I watched the vid. It was great. But at the same time, Squid does make a valid point about docker volume mappings....
  22. It's up to you if you want to share the resources of the core or not. You don't have much available with only 4 cores....
  23. If you're booting into GUI mode, then yes, the isolcpus entry should be under both, as you have it. Maybe a placebo effect for why it seemed "better" on the first reboot. But either way, better is better!
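     For reference, you can confirm isolcpus made it onto both boot entries straight from the console (a sketch; the 4-7 CPU list is just an example):

     # both append lines in /boot/syslinux/syslinux.cfg should carry the isolcpus= parameter
     grep -n "append" /boot/syslinux/syslinux.cfg
     # expected something like:
     #   append isolcpus=4-7 initrd=/bzroot
     #   append isolcpus=4-7 initrd=/bzroot,/bzroot-gui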