Magicmissle

Members
  • Posts: 41
  • Days Won: 1

Everything posted by Magicmissle

  1. Personally, this is why I moved away from Unraid. Even though you can get ESXi to nest and function, its performance is a dog. Even when passing through devices such as a NIC or storage, the performance is subpar compared to what you would expect. It's better to use a separate host for VMware vCenter and ESXi. The only value of nesting ESXi inside of Unraid is limited learning and very basic lab work.
  2. You can now, but you need to set the device bus type to VMware in the VM's XML.
  3. @bowerandy I did have the freezing problem when I first started trying this. I can't remember specifically what the solution was, but some of the things I changed were BIOS settings related to ACPI and power management. I'm running a relatively old motherboard, an ASUS Z10PE-D16 WS, though; I haven't upgraded because it lets me play with 512GB of RAM and 2x 22-core CPUs. I did notice a huge impact moving from legacy booting to UEFI. Another adventure was figuring out that Windows VMs played much better running as OVMF, especially when passing through GPUs or other PCIe devices. I spent years fine-tuning Unraid from 5.x to 6.x and never really got the same performance-to-headache ratio with Proxmox; it worked out of the box minus minor tweaks. I'm curious what your boot flags for grub are, and more details about your system and configuration. I included some pictures so you can see how I'm passing my USB card to the gaming VM; a sketch of the kind of boot flags I mean is below.
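     For reference, a hedged sketch of the kind of append line in /boot/syslinux/syslinux.cfg this sort of passthrough setup leans on; the vfio-pci ID and isolcpus range are placeholders, not my exact values:

        # /boot/syslinux/syslinux.cfg (excerpt)
        label Unraid OS
          menu default
          kernel /bzimage
          append intel_iommu=on iommu=pt pcie_acs_override=downstream vfio-pci.ids=1b73:1100 isolcpus=8-15 initrd=/bzroot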
  4. This worked for me. I no longer use Unraid, but I tested it with Proxmox 6.1-3: I passed through a Fresco Logic FL1000 USB 3.0 host controller to my gaming VM and was able to enable Oculus Link for my Quest. No problemo.
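     Roughly what the host side looks like on Proxmox, as a sketch; the VM ID and PCI address are examples, so use whatever lspci reports on your box:

        # enable the IOMMU on the kernel command line (/etc/default/grub), then update-grub and reboot:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
        # make sure the vfio modules load at boot
        printf 'vfio\nvfio_iommu_type1\nvfio_pci\nvfio_virqfd\n' >> /etc/modules
        # find the PCI address of the USB controller
        lspci -nn | grep -i fresco
        # hand the whole controller to VM 100
        qm set 100 -hostpci0 03:00.0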
  5. I have actually moved away from Unraid to Proxmox due to native iSCSI and NFS support. I miss Unraid, but Proxmox fills my needs.
  6. Would you mind sharing your boot options and BIOS tuning? Also, are you running EFI or legacy? I'm still tinkering with my 2950X on this X399 MSI Gaming Plus board; it seems that whenever I saturate the network or disks it freezes and requires a power cycle. Thanks!
  7. I have no idea; it could be better now, but after everything I've had to deal with since mid-2018 with my Threadripper 2 build, I would seriously shy away from it.
  8. Stability, mostly. I have 64GB and three Titan X GPUs and have never been able to get it stable. I'm actually working on it right this minute if you want any specific info. Here's the last kernel panic from a few minutes ago lmao 🤣
  9. 99% of my Unraid builds are Intel-based; I have a 2950X build that makes me want to jump off a bridge on a weekly basis.
  10. Yes, it is terrible, but this setup is only for a performance test and is not hosting any critical data. I was seeing a lot of performance loss on my drives since everything is SSD; after introducing the Adaptec RAID controllers, performance went through the roof. TRIM wasn't the only advantage.
  11. I had my old Unraid "unlimited" license changed when LimeTech made those changes to their licensing. I'm not the original thread OP, but I thought it was interesting. It depends on the adapters and also on how the RAID is exposed to Unraid. In my case I use Adaptec 71605Q cards, which connect to external SAS expanders with multiple JBODs. As an example, I have 3 RAID10 groups with maxCache enabled; each RAID group is 8 disks, but each appears as a single drive in Unraid. So as far as Unraid is concerned, it only sees 3 disks: one parity and two for the array. I get all the advantages this way; however, it reduces everything to a single point of failure, that being the RAID card itself. I guess my question is: am I only limited to 30 of these "devices"?
  12. Interesting 🧐, thanks, great info! I assumed it was unlimited for array devices as well; it does state "Unlimited attached storage devices" on the website for Pro. Right now I use hardware RAID groups and then use those inside of Unraid as array disks. So if I ever hit the 30-device limit with RAID pools, I would be stuck?
  13. It's unlimited now; there used to be a hard limit.
  14. Years ago I was able to get my ioDrive2s to work, but it was not solid and everything was a nightmare. I think this was on 5.x and I haven't tried since then. I still have like 8 of them to play with, but I need to pull them out of the R720xds in storage. Do you have any other options, or are you set on using the Fusion-io card?
  15. Have you flashed this card to IT mode? If you're wondering why: https://forums.servethehome.com/index.php?threads/what-is-it-mode.328/ This process also works for your card and will turn it into an HBA: https://forums.servethehome.com/index.php?threads/flashing-hp-h220-sas-card-to-latest-fw.13057/ A rough outline of the commands is below.
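     Very roughly, the flash looks like this from a DOS or EFI shell with the LSI tools; the firmware and boot ROM file names are placeholders that depend on the exact card and firmware package, so follow the guide rather than copying these verbatim:

        # list adapters and note the current (IR) firmware version
        sas2flash -listall
        # erase the existing firmware - do NOT reboot or power off between this and the next step
        sas2flash -o -e 6
        # flash the IT firmware and, optionally, the boot ROM
        sas2flash -o -f 2118it.bin -b mptsas2.rom
        # confirm the card now reports IT firmware
        sas2flash -listall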
  16. I'm surprised no one mentioned checking the ProLiant power settings from the iLO: https://techlibrary.hpe.com/docs/iss/proliant_uefi/UEFI_TM_030617/s_set_hp_power_regulator.html The A or B settings should have an impact for you. I also hit your thread about the P420.
  17. This should be taken care of within the rules. Try this, maybe: https://www.reddit.com/r/unRAID/comments/83ngly/docker_container_network_setting/
  18. Here's a really good example and guide to give you a hand: https://www.spxlabs.com/blog/2019/8/11/10gb-networking-unraid-and-improved-workflow
  19. No problem, happy to help! Just as an FYI, you might want to do an LACP connection with that NIC and set the MTU to 9000 / enable jumbo frames, if your switch supports it. A rough sketch of what that looks like is below.
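     On a plain Linux box that roughly translates to the following (interface names are placeholders; on Unraid the equivalent bonding mode and MTU options live under Settings > Network Settings):

        # build an 802.3ad (LACP) bond from the two 10Gb ports and bump the MTU
        ip link add bond0 type bond mode 802.3ad
        ip link set eth0 down; ip link set eth0 master bond0
        ip link set eth1 down; ip link set eth1 master bond0
        ip link set bond0 mtu 9000 up
        ip link set eth0 up; ip link set eth1 up
        # the switch-side port channel must also be configured for LACP with jumbo frames enabled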
  20. Ok, do you have a VM running that uses the GPU? Do you have terminal access to Unraid directly?
  21. Do you have any other network ports? I would try using one of those instead and leave the 10Gb card disconnected until you're able to access the web GUI. If that doesn't work and you don't have direct access to the Unraid terminal and can't SSH into it, plug your Unraid USB stick into another computer and rename the file /config/network.cfg to /config/network.cfg~old; that will reset the network settings when you reboot the host. I don't recommend forcefully shutting down, but in many cases you might need to do just that if you have no way to do anything from the shell or the GUI.
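     If you do end up with shell access, the same reset can be done in place; on a running Unraid box the flash drive is mounted at /boot:

        mv /boot/config/network.cfg /boot/config/network.cfg~old
        reboot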
  22. putting this here for future reference https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
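     A minimal sketch of what that doc boils down to for a VM host: reserve hugepages either on the kernel command line or at runtime, then check what was actually allocated. The sizes and counts here are just examples:

        # reserve 16 x 1GiB hugepages at boot (append to the kernel command line):
        #   default_hugepagesz=1G hugepagesz=1G hugepages=16
        # or reserve 2MiB hugepages at runtime:
        echo 8192 > /proc/sys/vm/nr_hugepages
        # verify what is allocated and free
        grep -i huge /proc/meminfo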
  23. I would check whether it's a network loop first. Perhaps a VM or docker container is running as a DHCP server/router on the same VLAN or subnet? I did have a problem with a faulty 4-port Broadcom card 14 years ago that would bring down an entire switch, but that is the only time I ever experienced an issue like that. It was related to the card's ROM being loaded with a bad firmware image; I ended up just tossing the card. Maybe try passing the card through to a VM to verify it works correctly? You can isolate it on its own VLAN at the switch so as not to disrupt traffic for everyone else.
  24. +1. My reason is that I need the performance benefits of it being natively baked in versus emulating it from within a VM. Some of my use cases are very dependent on hardware-accelerated iSCSI, and emulation vastly reduces performance in high-bandwidth, low-latency scenarios.
  25. Thank you for the great information! I thought it was something deeper, like a kernel option set during compiling or something; I didn't realize it was already enabled right out of the box! Is there any specific documentation about hugepages and VM tuning? Is it specific to OVMF or SeaBIOS, etc.?
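     For anyone finding this later: as far as I know it isn't tied to OVMF or SeaBIOS; hugepage backing is a memory-backing option in the VM's libvirt XML. A minimal sketch (the memory size is just an example, and the host needs enough hugepages reserved first, per the kernel doc linked a few posts up):

        <!-- inside the domain XML, e.g. via virsh edit <vm-name> -->
        <memory unit='KiB'>16777216</memory>
        <memoryBacking>
          <hugepages/>
        </memoryBacking>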