Everything posted by escocrx

  1. Gave up because of JBOD? You can set the built-in cards to HBA mode with the Service Pack for ProLiant (SPP) disc. It won't work with the onboard program. I run an ML350p Gen8. Too late now I guess, but the file is P03093_001_spp-Gen8.1-SPPGen81.4.iso. You can find it for download. Load it into a physical or virtual drive and you can set the controllers to HBA mode, update firmware, etc. with it. I have tons of passthrough issues, but I only use VMs to play around, so I just give up when I hit them. I can get the GFX card and any add-on cards to pass through, but onboard things like the NIC will not pass through.
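A quick sanity check from the console after the switch, since in HBA mode the individual drives show up instead of one big logical volume (just a sketch; device names and columns will differ per system):
    # each physical disk should now appear with its own model/serial
    lsblk -o NAME,MODEL,SERIAL,SIZE
    ls -l /dev/disk/by-id/ | grep -v part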
  2. Old post, but yes, it works fine. 32 threads, 384GB of RAM, GTX 1650 for encoding. VMs can be a pain for passthrough. Also, the PCIe slots can't be split ("bifurcated"), so multi-NVMe host cards only see one NVMe device. I wanted to run 4x NVMe drives in one slot; no go, it's too old for that. I have 19 storage devices attached, a mixture of SSDs and SAS HDs: ZFS pool, SSD cache pool, and then the XFS array. I actually retired it a year or so ago. However, my replacement started having lockups I couldn't figure out, so it got brought back alive. Uses a good 300W at idle....
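If you're testing a multi-NVMe card in one of these slots, this is roughly how I'd check how many of the drives the box actually sees (illustrative only):
    # one entry per NVMe controller the slot actually exposes
    lspci -nn | grep -i 'non-volatile'
    ls /dev/nvme*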
  3. Oh, by the way, if you didn't know: if you don't have two CPUs installed, you don't get all the PCIe or RAM slots.
  4. So kinda late, but I've since switched to bare-metal Unraid. The free version's ESXi limitations didn't allow me to utilize my 32 cores, but I had ESXi working great. It just limited me to 8 virtual cores for Unraid, which surprisingly wasn't enough for my usage. The ESXi SD card is still there, but I have USB boot first.
  - USB: FebSmart 4 Port PCI Express (PCIe) Superspeed USB 3.0 Card Adapter, 2 Dedicated 5Gbps Channels, 10Gbps Total Bandwidth, Built-in Self-Powered Technology, No Need for Additional Power Supply (FS-2C-U4-Pro). If you are virtualizing and passing through the USB controller, get a USB card that has multiple channels/controllers; the host sees them separately and you can utilize it better. That applies to Unraid as well as ESXi. There are tons of controllers out there (see the quick check after this post).
  - I don't mix LFF and SFF. If I said that, I apologize. The only SFF devices I have are the SSDs.
  - LSI SAS 9211-8i 8-port 6Gb/s Internal (IT-MODE) ZFS JBOD HBA / (IR-MODE) RAID is the card I use.
  - My Unraid drives are on the LSI controller. I pass the entire controller through and Unraid sees it just fine.
I have this cage: https://www.ebay.com/itm/HP-677433-001-ML350E-NHP-Drive-Cage/264486665179?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m1438.l2649 It is the non-hot-plug version, where the far right slots are just blocked off by a panel you can easily remove with a screw. I used an adapter to mount the SSDs to an LFF tray. ML350 SFF cages go for around 50-75 USD; I see DL380 ones go for about 25 USD. I'd honestly probably go all ML350E non-hot-plug cages in hindsight; it opens doors for more customization. I use a P830 controller with "wide SAS" connectors that are SUPER hard to find; I finally got them from China. Also, if you ever want to add a backplane, you can adapt the ML350 NHP cage with a couple of screws and mount the backplane. The cage itself is identical.
The main reason I used ESXi was my graphics card issues. I haven't been needing that recently, so I run bare-metal Unraid. One GREAT part of Unraid is that it doesn't freak out if you change systems as long as the USB drive is the same. I can boot into ESXi and virtualize Unraid with no issues; it just takes away CPU cores and changes the amount of available RAM, and with 384GB to distribute that's not a big deal. I will also say that while figuring everything out, being in an ESXi host is much easier to work with. The boot process with 384GB of RAM and multiple drive controllers isn't fast, it feels like 5-10 minutes; restarting a VM takes seconds.
I also have two NVMe drives installed now. The PCIe slots cannot do bifurcation: the chipset claims the ability, but the motherboard doesn't support it. I had dual NVMe on one carrier card and it would only see one, so now I have them in two slots.
I do not have the crazy fan issues. The fans do hover at 20% all the time, but I'm fine with that. I might pull the "unsupported" cards out one day to see if it'll go down, but right now I utilize those unsupported cards, so it would simply be to test.
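Quick check for the multi-controller USB card point above: whether the card's channels really show up as separate controllers, and whether each lands in its own IOMMU group so it can be passed through on its own. The PCI address below is just an example; use whatever the first command prints for your card.
    # a multi-channel card should show up as several USB controller entries
    lspci -nn | grep -i usb
    # for each controller address from that list (example address), check its IOMMU group
    readlink /sys/bus/pci/devices/0000:07:00.0/iommu_group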
  5. Just an update: 6.8.1 has doubled my user share speeds.
  6. He fixed it. Try the latest now. He updated it a couple of hours ago and it fixed all the issues for me: binhex/arch-sabnzbdvpn:2.3.9-3-03
  7. The only reason I run it as a guest on ESXi is because of VM issues, the main one being GPU passthrough. I run ESXi from the onboard SD card and Unraid from a USB add-on controller card. The USB add-on card let me pass it through without messing with the built-in USB hosts. I do still run VMs on Unraid; I have a Win10 VM on Unraid that runs flawlessly. I have recently started playing with SpaceInvader One's Macinabox as well on the nested Unraid, I just don't pass a GPU to it. I have passed a GPU through ESXi to Unraid for the linuxserver.io Nvidia version of Unraid and that does work for transcoding. I did not try a double passthrough, though. Now that I think of it, I'll probably try it for fun.
  8. After the update this morning I now get errors, one being that pgrep is not found. My error is related to the openvpn.sh script calling it; I looked and it does call it, but when I go to the console pgrep is not present. It's also called in the sabnzbd.sh script. After removing the container completely and reinstalling:
2020-01-03 14:14:43,197 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
2020-01-03 14:14:43,197 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
2020-01-03 14:14:44,201 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
2020-01-03 14:14:45,203 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
2020-01-03 14:14:45,203 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
Pulled binhex/arch-sabnzbdvpn:2.3.9-1-07, which fixed my issue.
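If someone hits this before a fixed image is out, a rough stop-gap, assuming the container is named binhex-sabnzbdvpn in your Docker tab (adjust the name; the change won't survive a container update, and rolling back to a known-good tag is the cleaner fix; pgrep comes from Arch's procps-ng package):
    # confirm pgrep really is missing inside the container
    docker exec binhex-sabnzbdvpn sh -c 'command -v pgrep || echo "pgrep missing"'
    # pull it in temporarily
    docker exec binhex-sabnzbdvpn pacman -Sy --noconfirm procps-ng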
  9. You can pass the main GPU to VMs if you don't use the GUI portion of Unraid.
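Roughly how that's done on the 6.8-era builds: stub the card with vfio-pci on the kernel command line so the host never grabs it. The IDs below are placeholders; use the [vendor:device] pairs lspci prints for your own GPU and its audio function.
    # find the GPU's and its HDMI audio function's [vendor:device] IDs
    lspci -nn | grep -iE 'vga|audio'
    # then add them to the append line in /boot/syslinux/syslinux.cfg, e.g.
    #   append vfio-pci.ids=10de:1f82,10de:10fa initrd=/bzroot
    # and reboot; the card is then free to hand to a VM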
  10. ML350E LFF cages are 4-slot with 2 blocked-off slots that just unscrew, so 6 LFF if you go that route. It's ACTUALLY the same cage as the ML350p cages, just without the backplane. You can get the ML350E cage and mount a backplane to make it a hot-swap drive cage.
  11. Direct I/O actually caused issues. The larger the file, the more it would run out of steam and freeze the transfer until it seemed to catch back up.
  12. I tried the Direct I/O setting. Same results: disk share fast, user share slow.
  13. I only used it to test. It was a post about slow speeds and they asked for disk share and iperf results, so I did it beforehand. I forget who it was, but they also said they don't recommend using it.
  14. Hi, I have a 10Gb local network. When using user shares I only get 250-300MB/s. If I use disk shares I get the full speed of my devices; for example, my cache is SSD and I can get a steady 600-700MB/s when using the disk share. Are there any settings I can check out to enhance user share transfer speeds? MTU/jumbo frames do nothing. Is the overhead of user shares really that much? I have used iperf to check speeds and I get a steady 7Gbit/s, so my network isn't perfect, but it's still better than 250MB/s. Thanks
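For reference, the iperf check was basically this (iperf3 syntax shown with a placeholder server IP; adjust for whichever iperf build you run):
    # on the Unraid box
    iperf3 -s
    # on the client, 4 parallel streams to rule out a single-stream bottleneck
    iperf3 -c 192.168.1.10 -P 4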
  15. So... my server is an ML350p Gen8.
CPU: 2x E5-2667 v2 (originally E5-2670)
MEM: 128GB 1600MHz RDIMM HPE
HD controllers: P420i 2GB FBWC in RAID with 8x 2TB; LSI SAS 9211-8i HBA with 4x 12TB spinners and 3x 480GB SSDs
NIC: HPE Ethernet 10Gb 2-port 561T (onboard 4-port 1Gb not in use)
Now, why I started with "So...": getting hard drive cages was a pain. They are super overpriced and hard to get working together. With Gen8 they stopped using SAS expander add-on cards and integrated the expanders into the cage backplanes. I found decent deals on cages that did NOT have the expander backplane but the regular backplane, and the onboard P420i cannot handle more than one regular-backplane cage. I did not know this. After hours of research I found that the P830 controller can do it with 68-pin "wide" SAS to dual SAS connectors, but even then you can only run two regular-backplane drive cages. If you can get the expander-style cages, then you can use the onboard P420i for up to 3 cages, either 3x 8-bay 2.5" SFF or 3x 6-bay 3.5" LFF setups. Some places want $500 USD or more for these cages. Gen8 is Gen8... you can use ML350E cages, however, which have no SAS backplane and are great for HBA add-ons. You have to use the trays as well; I use hot-swap trays for all my drives, as they were cheaper than non-hot-swap trays. ML350E cages are cheaper and fit, but have no SAS backplane.
Everything is as "enterprise" and proprietary as can be. Your add-on cards will make the system react weird because they're not "HP" branded cards. https://h20195.www2.hpe.com/v2/getdocument.aspx?docname=c04128239 If the part number isn't listed in here, it's going to make your system funky, but it will still work. Mine just has elevated fan speeds; they won't go under 20%. If I remove all the unbranded add-on cards it idles at around 6-10% fan speed. Fans are proprietary and MUST be installed, especially if you have dual CPUs. It will run at full speed if you take one fan out or if it fails, and at full speed it sounds like a jet. You can probably rig other coolers, but why? The box has a plastic air diverter made to fit perfectly, and it works great at cooling.
I cannot for the life of me get VM GPU passthrough to work properly on this system in Unraid. It forces reboots to reset the cards, and yes, I have read the community posts extensively and watched SpaceInvader One's great vids. The same GPU on an Intel B250-chipset system works flawlessly for VMs. Right now I run ESXi 6.5 with Unraid as a guest. I use ESXi for all my guests and it is great. I passed through the LSI HBA and one of the 10G NIC ports, and away it goes, no hiccups. In ESXi 6.5 at least, you can boot via USB like normal, just add the USB device. In 6.0 and 5.5 it doesn't seem to work right, but I can get it to work; I just have to manually hit enter in the guest BIOS.
You're in the enterprise HPE world, where drivers and support are paid. Gen8 is out of warranty, and unless you have a paid support contract you can't even download drivers without a headache. Processors can be had for 40 bucks. RAM is kind of expensive, but can be had for about $20-40 per 16GB. SAS drives can be had for $10-20 per 2TB; I think I paid $150 shipped for 10x 2TB. My system's a beast, but it wasn't cheap. For the same money I'd probably build something else next time and get more. I mean, no native USB 3.0 and DDR3 RAM? It's old. However, I'm happy with my setup. Enterprise-level equipment has so many awesome features and so much pluggability. Once your drive cages are mounted it's a tool-less system. I can answer more questions.
I'm sorry if I jumped all over. I just went through this Gen8 setup in the last couple of months. It's a headache I'd recommend skipping for the impatient.
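For anyone setting up the same ESXi passthrough, this is roughly how I'd identify the HBA and the NIC port before marking them for passthrough in the ESXi UI (run from any Linux boot on the host; the grep patterns are just examples):
    # the LSI HBA and the 10G NIC show up with PCI addresses and [vendor:device] IDs
    lspci -nn | grep -iE 'lsi|sas|ethernet'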
  16. Super old original post, and even the latest reply is old, but just in case someone else is looking: it worked for me. https://github.com/im-0/hpsahba
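Very roughly, usage looks like the sketch below. The hpsahba flags are from my memory of the project's README, so treat them as assumptions and double-check there before running anything; enabling HBA mode wipes the controller's array configuration.
    # find the controller's generic SCSI node (sg1 below is just an example)
    ls -l /dev/sg*
    ./hpsahba -i /dev/sg1    # assumed flag: print controller info
    ./hpsahba -E /dev/sg1    # assumed flag: enable HBA mode (destructive!)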
  17. Thanks Frank, yeah, that's what I gathered too. I got the disable-checksumming tip from Google. It worked for a few days, and now it doesn't care if I disable it. No biggie. I'll be picking up a "known" Linux-friendly device. I just thought I'd be able to get away with the $40 special.
  18. I have a Tehuti Networks Ltd. TN9710P 10GBase-T/NBASE-T Ethernet Adapter based NIC. It works, but in my use it's been buggy: it randomly disappears and overwhelms my syslog with errors. I have it disabled for now. It is fast when it wants to work. It is Rosewill branded and cost me roughly 40 bucks. In my bare-metal Windows box it runs great.
  19. It has an H67 chipset, I believe. https://ark.intel.com/content/www/us/en/ark/products/52807/intel-h67-express-chipset.html says VT-d: No...
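If you want to confirm on the box itself rather than trusting the ARK page, something like this from a Linux console will do it (a sketch, assuming intel_iommu=on is on the kernel command line):
    # DMAR/IOMMU lines appear only if the CPU, chipset and BIOS all support VT-d
    dmesg | grep -i -e dmar -e iommu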
  20. Well, I thought my fix worked. It had for the last few days, but now I get the error in bursts of about 40-50 every 15 minutes, it seems. I'll just disable it again. I do note that it seems to work fine even with the errors, but if I look at the syslog it'll hang.
  21. Hi, when using the NIC from the topic title, a Tehuti Networks Ltd. TN9710P 10GBase-T/NBASE-T Ethernet Adapter (tn40xx device), on 6.8.0, my system log gets blasted forever with "kernel: tn40xx: rxd_err = 0x28". However, in the syslog of the diagnostics it only shows up once at the very end. It's so bad that if I try to view the syslog via the browser it will lock my server for roughly 5 minutes. I can stop it from reporting the error by using ethtool -K ethX rx off, but it comes back if I stop/start the array or reboot. Some kind of issue with the TOE features? This is one of the lower-end 10G NICs out there; I got mine for roughly $40 USD. My specific device didn't even work until 6.8.0, so I was super happy when I saw it come alive in 6.8! No more SMB multichannel tricks to get the speeds I'm looking for. Can I just disable the TOE checksumming on boot for just this card? Thanks for any input or direction. tower-diagnostics-20191223-1709.zip
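One way to run that at boot would be the flash drive's go file, sketched below. eth1 is a placeholder (check which interface the Tehuti card is with ip link), and since the setting resets on an array stop/start, the line may still need re-running by hand or from a user script at array start.
    # append the workaround to Unraid's startup script on the flash drive
    echo 'ethtool -K eth1 rx off' >> /boot/config/go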
  22. Hi, so I noticed someone else lost connectivity after switching to LSIO. I lose a NIC after the change to LSIO. I've attached my diags from stock and LSIO; if I go back to stock it comes back. I will note that 6.7 didn't have support for the NIC; only since 6.8 did it start working. tn40xx is the device. [1fc9:4027] 10:00.0 Ethernet controller: Tehuti Networks Ltd. TN9710P 10GBase-T/NBASE-T Ethernet Adapter Thanks for any help! tower-diagnostics-20191219-1547-lsio.zip tower-diagnostics-20191219-1558-stock.zip