escocrx

Members
  • Posts: 24
  • Joined
  • Last visited

escocrx's Achievements

  • Rank: Noob (1/14)
  • Reputation: 3

  1. Gave up because of JBOD? You can set the built-in cards to HBA mode with the Service Pack (SPP) discs; it won't work with the onboard utility. I run an ML350p Gen8. Too late now, I guess, but the file is P03093_001_spp-Gen8.1-SPPGen81.4.iso. You can find it for download. Load it into the physical or virtual drive and you can set the cards to HBA mode, update firmware, etc. with it. I have tons of passthrough issues, but I only use VMs to play around, so I just give up when I do. I can get the GPU and any add-on cards to pass through, but onboard devices like the NIC will not.
  2. Old post. Yes, it works fine: 32 threads, 384GB of RAM, GTX 1650 for encoding. VMs can be a pain for passthrough. Also, PCIe slots can't be "shared" (bifurcated), so multi-NVMe host cards only see one NVMe device; I wanted to run 4x NVMe drives in one slot, no go, too old for that (see the lspci sketch after this list). I have 19 storage devices attached, a mixture of SSDs and SAS HDDs: a ZFS pool, an SSD cache pool, and then the XFS array. I actually retired it a year or so ago; however, my replacement started having lockups I couldn't figure out, so it got brought back alive. Uses a good 300W at idle....
  3. Oh, by the way, in case you didn't know: if you don't have two CPUs installed, you don't get all the PCIe or RAM slots.
  4. So, kinda late, but I've since switched to bare-metal Unraid. The free version of ESXi didn't allow me to utilize my 32 cores... but I had ESXi working great. It just limited me to 8 virtual cores for Unraid, which surprisingly wasn't enough for my usage. The ESXi SD card is still there, but I have USB boot first.
     - USB: FebSmart 4 Port PCI Express (PCIe) Superspeed USB 3.0 Card Adapter, 2 Dedicated 5Gbps Channels, 10Gbps Total Bandwidth, Built-in Self-Powered Technology, No Additional Power Supply Needed (FS-2C-U4-Pro). If you are virtualizing and passing through the USB controller, get a USB card that has multiple channels/controllers; the host sees them separately and you can utilize it better. That applies not just to ESXi but to Unraid as well. There are tons of controllers out there.
     - I don't mix LFF and SFF. If I said that, I apologize. The only SFF drives I have are the SSDs.
     - The card I use is an LSI SAS 9211-8i 8-port 6Gb/s internal HBA (IT mode for ZFS/JBOD, IR mode for RAID).
     - My Unraid drives are on the LSI controller. I pass the entire controller through and Unraid sees it just fine (see the passthrough sketch after this list). I have this cage: https://www.ebay.com/itm/HP-677433-001-ML350E-NHP-Drive-Cage/264486665179?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m1438.l2649 It is the non-hot-plug version, where the far-right slots are just blocked off by a panel you can easily remove with a screw. I used an adapter to mount the SSDs to an LFF tray. ML350 SFF cages go for around 50-75 USD; I see DL380 cages go for about 25 USD. I'd honestly probably go all ML350E non-hot-plug cages after the fact; it opens doors for more customization. I use a P830 controller with "wide SAS" connectors that are SUPER hard to find; I finally got them from China. Also, if you ever want a backplane, you can just adapt the ML350 NHP cage with a couple of screws and mount the backplane. The cage itself is identical.
     The main reason I used ESXi was my graphics card issues. I haven't needed that recently, so I run bare-metal Unraid. One GREAT part of Unraid is that it doesn't freak out if you change systems as long as the USB drive is the same. I can boot into ESXi and virtualize Unraid with no issues; it just takes away some CPU cores and changes the amount of available RAM, and with 384GB to distribute that's not a big deal. I will also say that while figuring everything out, being in an ESXi host is much easier to work with. The boot process with 384GB of RAM and multiple drive controllers isn't fast; it feels like 5-10 minutes. Restarting a VM takes seconds. I also have two NVMe drives installed now. The PCIe slots cannot do bifurcation; the chipset claims the ability, but the motherboard doesn't support it. I had dual NVMe on one carrier card and it would only see one, so now I have them in two slots. I do not have the crazy fan issues. The fans do hover at 20% all the time, but I'm fine with that. I might pull the "unsupported" cards out one day to see if the speed goes down, but right now I utilize those unsupported cards... it would simply be to test.
  5. Just an update: 6.8.1 has doubled my user share speeds.
  6. He fixed it. Try the latest now; he updated it a couple of hours ago and it fixed all the issues for me: binhex/arch-sabnzbdvpn:2.3.9-3-03.
  7. The only reason I run it as a guest on ESXi is because of VM issues, the main one being GPU passthrough. I run ESXi from the onboard SD card and boot Unraid from a USB add-on controller card; the add-on card let me pass it through without messing with the built-in USB hosts. I do still run VMs on Unraid. I have a Win10 VM on Unraid that runs flawlessly. I have recently started playing with SpaceInvaderOne's Macinabox as well on the nested Unraid; I just don't pass it a GPU. I have passed a GPU through ESXi to Unraid for the linuxserver.io Nvidia build of Unraid, and that does work for transcoding. I did not try to double-passthrough, though. Now that I think of it, I'll probably try it for fun.
  8. After the update this morning I now get errors, one being that pgrep is not found. My error is related to the openvpn.sh script calling for it. I looked and it calls for it, but when I go to the console pgrep is not present; it is also called in this script. After removing the container completely and reinstalling:
     2020-01-03 14:14:43,197 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
     2020-01-03 14:14:43,197 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
     2020-01-03 14:14:44,201 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
     2020-01-03 14:14:45,203 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
     2020-01-03 14:14:45,203 DEBG 'watchdog-script' stderr output: /home/nobody/sabnzbd.sh: line 14: pgrep: command not found
     Pulled binhex/arch-sabnzbdvpn:2.3.9-1-07, which fixed my issue (see the tag-pinning sketch after this list).
  9. You can pass the main GPU to VMs if you don't use the GUI portion of Unraid.
  10. ML350E LFF cages are 4-slot, with 2 blocked-off slots that just unscrew, so 6 LFF bays if you go that route. It's ACTUALLY the same cage as the ML350p cages, just without the backplane. You can get the ML350E cage and mount a backplane to turn it into a hot-swap drive cage.
  11. Direct I/O actually caused issues. The larger the file, the more the transfer would run out of steam and freeze until it seemed to have caught back up.
  12. I tried direct I/O. Same results: disk share fast, user share slow.
  13. I only used it to test. It was a post about slow speeds, and they asked for disk share and iperf results, so I ran them beforehand. I forget who it was, but they also said they do not recommend using it.
  14. Hi, I have a 10Gb local network. When using user shares I only get 250-300MB/s. If I use disk shares I get the full speed of my devices; for example, my cache is SSD and I can get a steady 600-700MB/s when using the disk share. Are there any settings I can check to improve user share transfer speeds? MTU/jumbo frames do nothing. Is the overhead of user shares really that much? I have used iperf to check network speeds (see the iperf sketch after this list) and get a steady 7Gbit/s, so my network isn't perfect, but it's still better than 250MB/s. Thanks
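
Sketch referenced in post 2: a quick way to check how many NVMe controllers a slot actually exposes. This is a minimal, hedged example assuming a Linux shell on the host; it only lists what the PCIe bus enumerates and changes nothing.

    # List every NVMe controller the PCIe bus exposes. On a slot that
    # cannot bifurcate, a dual-NVMe carrier card (one without its own
    # PCIe switch) will show up as a single controller here.
    lspci -nn | grep -i nvme

    # Cross-check against the block devices the kernel created.
    ls /dev/nvme*n1 2>/dev/null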
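Sketch referenced in post 4: how whole-controller passthrough like that is typically set up on Unraid. This is a hedged outline, not the poster's exact steps; 1000:0072 is only an example ID (common for SAS2008-based 9211-8i cards) and must be replaced with whatever lspci reports on your system.

    # Print each PCI device with its IOMMU group, so you can see whether
    # the HBA (or USB controller) sits in a group by itself.
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
        printf 'IOMMU group %s: ' "$g"
        lspci -nns "${d##*/}"
    done

    # To reserve the controller for a VM, older Unraid releases had you add
    # its vendor:device ID to the kernel append line in
    # /boot/syslinux/syslinux.cfg, for example:
    #   append vfio-pci.ids=1000:0072 initrd=/bzroot
    # Newer releases expose a similar binding option under Tools > System Devices.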
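Sketch referenced in post 8: pinning the container to the image tag that worked instead of tracking :latest, and checking for pgrep inside it. The container name "sabnzbdvpn" is a placeholder; use whatever your container is actually called.

    # Pull the known-good tag rather than :latest.
    docker pull binhex/arch-sabnzbdvpn:2.3.9-1-07

    # Verify that pgrep exists inside the running container
    # ("sabnzbdvpn" is a placeholder container name).
    docker exec sabnzbdvpn which pgrep

In the Unraid Docker template the same pin can be made by putting the full tag in the Repository field instead of :latest.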
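Sketch referenced in post 14: the kind of iperf check used to rule out the network. It assumes iperf3 on both ends (classic iperf uses the same -s/-c/-P flags); 192.168.1.10 is a placeholder for the server's address.

    # On the Unraid server:
    iperf3 -s

    # On the client; -P 4 runs four parallel streams, which is closer
    # to how SMB actually loads a 10Gb link:
    iperf3 -c 192.168.1.10 -P 4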