
Magicmissle


Posts posted by Magicmissle

  1. Personally, this is why I moved away from Unraid: even though you can get ESXi to nest and function, its performance is a dog.

     

    Even when passing through devices such as a NIC or storage, the performance is subpar compared to what you would expect. It's better to use a separate host for VMware vCenter and ESXi. The only value in nesting ESXi inside Unraid is for limited learning and very basic lab work.

  2. 35 minutes ago, bowerandy said:

    Hi, I bought the same card and am passing it through to Proxmox 6.1-7. Unfortunately, when I start the VM receiving the passthrough, the whole of Proxmox hangs and I have to reboot the host. I'm just using the UI for the VM and passing the PCI card through like I do with the GPU but I assume there is something I'm missing here. @Magicmissle, can you give more details on how you achieved this please?


    @bowerandy I did have the freezing problem when I first started trying this. I can't remember specifically what the solution was, but some of the things I changed were BIOS settings related to ACPI and power management. I'm running a relatively old motherboard (ASUS Z10PE-D16 WS), though; I haven't upgraded because it lets me play with 512GB of RAM and 2x 22-core CPUs.

     

    I did notice a huge impact moving from legacy booting to UEFI. Another adventure was figuring out that Windows VMs played much better running under OVMF, especially when passing through GPUs or other PCIe devices. I spent years fine-tuning Unraid from 5.x to 6.x and never really got the same performance-to-headache ratio I get with Proxmox, which worked out of the box minus minor tweaks.

     

    I'm curious what your GRUB boot flags are, and I'd like more details about your system and configuration. I included some pictures so you can see how I'm passing my USB card through to the gaming VM, plus a sketch of typical boot flags below.

     

     

     

    4990FEF4-3C96-47EA-8B5F-C62ECD61C147.jpeg

    81ECC7F8-E861-4A82-9878-4F9D593519D5.png

    CF475C47-8F92-420A-A667-31CDA2044EA4.jpeg
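
    For context, a generic starting point for passthrough on Proxmox looks something like the lines below. This is a sketch of the usual IOMMU/VFIO settings rather than my exact config (Intel shown; AMD boards want amd_iommu=on instead):

        # /etc/default/grub on the Proxmox host
        GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

        # /etc/modules - load the VFIO modules at boot
        vfio
        vfio_iommu_type1
        vfio_pci
        vfio_virqfd

        # apply the changes, then reboot
        update-grub
        update-initramfs -u -k all

    If the host still hangs the moment the VM grabs the card, it's often an IOMMU grouping or device reset issue rather than the flags themselves.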

  3. 4 minutes ago, Dent_ said:

    I know it is a little different, but I have a Threadripper 3960X running for almost two months with no problems so far. The only minor issue I am having is getting temps from the CPU, and that is apparently a driver issue (or lack of a driver) for the newer sensors.

    Would you mind sharing your boot options and BIOS tuning? Also, are you running EFI or legacy? I'm still tinkering with my 2950X on this X399 MSI Gaming Plus board; it seems that whenever I saturate the network or disks, it results in a freeze requiring a power cycle. Thanks!

  4. 1 hour ago, CSIG1001 said:

    Is it safe to say that Unraid is safe with a 3950X?

    I have no idea. It could be better now, but from everything I have had to deal with since mid-2018 with my Threadripper 2 build, I would seriously shy away from it.

     

    2 hours ago, bonienl said:

    Have a look at this

    Thank you for sharing; I have tried almost everything listed or mentioned here. I'm starting to believe it's component related, but after five different Intel 10GbE adapters and the constant freezes from network saturation, I think I'm going to part it out to a family member.

     

  5. 26 minutes ago, mlapaglia said:

    What issues do you have? I'm moving over to an X570 board with a 3900X tonight.

    Stability, mostly. I have 64GB of RAM and three Titan X GPUs and have never been able to get it stable. I'm actually working on it right this minute if you want any specific info.

     

    Here is the last kernel panic from a few minutes ago, lmao 🤣

    B3EB2340-2F44-4428-899D-63137D6B0C2D.jpeg

  6. Yes, it is terrible, but this setup is only for a performance test and is not hosting any critical data. I was getting a lot of performance loss on my drives since everything is SSD; after introducing the Adaptec RAID controllers, performance went through the roof, and TRIM wasn't the only advantage.

  7. 12 minutes ago, itimpi said:

    The array limits for each of the licence levels are detailed in the notes underneath the headline price part.
     

    Not quite sure what you mean about the hardware RAID groups? As far as Unraid is concerned, hardware RAID is invisible to it, and all drives in a hardware RAID group are presented as a single drive to Unraid. This means all recovery of individual drives within such a group has to be handled by the hardware RAID. It also means you have to have a parity drive/group that is at least as large as the largest hardware RAID group. If you mean that you are going to break the groups down to individual drives, then the limits for drives apply.


    I had my old Unraid "unlimited" license changed when LimeTech made those changes to their licensing. I'm not the original thread OP, but I thought it was interesting.

     

    It depends on the adapters and also on how the RAID is exposed to Unraid. In my case I use Adaptec 71605Q cards, which connect to external SAS expanders with multiple JBODs. As an example, I have three RAID 10 groups with maxCache enabled; each RAID group is 8 disks, yet each appears as a single drive in Unraid.

     

    So as far as Unraid is concerned, it is only represented as 3 disks: one parity and two for the array. I get all the advantages this way; however, it reduces everything to a single point of failure, namely the RAID card itself. I guess my question is: am I only limited to 30 of these "devices"?
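
    To spell out the math as I understand it (the device counts are my own setup; the 30 is the Pro array-device limit from the pricing page):

        3 hardware RAID 10 groups x 8 disks = 24 physical disks behind the controllers
        What Unraid actually sees           = 3 array devices (1 parity + 2 data)
        Pro array-device limit              = 30, so 27 array slots still free

    The 24 physical disks never show up individually, so as I read it they don't count against that array limit at all.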
     

     

  8. 5 minutes ago, itimpi said:

    It depends on whether you are talking about 'attached' devices or 'array' devices. The Pro licence has a limit of 30 for 'array' devices but no limit on 'attached' devices. This is all specified on the Unraid Pricing page.

    Interesting 🧐 thanks, great info! I assumed it was unlimited for array devices too; it does state "Unlimited attached storage devices" on the website for Pro. Right now I use hardware RAID groups and then use those inside Unraid as array disks, so if I ever hit the 30-device limit with these RAID pools, I would be stuck?

  9. Years ago I was able to get my ioDrive2s to work, but it was not solid and everything was a nightmare. I think this was on 5.x, and I haven't tried since then. I still have about 8 of them to play with but need to pull them out of the R720xds in storage. Do you have any other options, or are you set on using the Fusion-io card?

  10. 1 minute ago, ddaavve said:

    The problem I'm having is that I can't plug the Unraid server in without taking down the network. I'm thinking I should reboot by hitting the reset button, but won't that be an unclean restart?

    Do you have any other network ports? I would try using one of those instead and leave the 10Gb card disconnected until you're able to access the web GUI.

     

    If this doesn't work and you don't have direct access to the Unraid terminal or can't SSH into it, plug your Unraid USB stick into another computer and rename the file /config/network.cfg to /config/network.cfg~old (see the example below).

     

    That will reset the network settings when you reboot the host. I don't recommend forcefully shutting down, but in many cases you might need to do just that if you have no way to do anything from the shell or the GUI.
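
    Roughly, with the flash drive plugged into another Linux box, it's just a rename (the mount point below is only an example; use wherever your stick shows up):

        # rename the network config so Unraid falls back to defaults (DHCP) on the next boot
        mv /mnt/usb/config/network.cfg /mnt/usb/config/network.cfg~old

    On Windows you can do the same thing by renaming the file in Explorer.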

  11. I would check to see if it's a network loop first. Perhaps a VM or Docker container is running as a rogue DHCP server/router on the same VLAN or subnet? (There's a quick way to check for that at the end of this post.)


    I did have a problem with a faulty 4-port Broadcom NIC 14 years ago that would bring down an entire switch, but it was the only time I ever experienced an issue like that. It was related to the card's ROM being loaded with a bad firmware image; I ended up just tossing the card.

     

    Maybe try passing the card through to a VM to verify it works correctly? You can isolate it on its own VLAN with a switch so as not to disrupt traffic for everyone else.
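
    A quick way to check the rogue DHCP theory from the Unraid console, or any Linux box on that segment, is to watch the DHCP ports directly (the interface name is just an example):

        # any OFFER/reply coming from something other than your real router is the culprit
        tcpdump -vni eth0 port 67 or port 68

    Let it run for a few minutes so clients have a chance to renew their leases.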

  12. +1. My reason is that I need the performance benefits of it being natively baked in versus emulating it from within a VM. Some of my use cases are very dependent on hardware-accelerated iSCSI, and emulation vastly reduces performance in high-bandwidth, low-latency scenarios.

  13. 9 hours ago, jonp said:

    Can't really answer that question. It's not about quantity as much as it's about use case and applications. I've never needed hugepages for any of my VMs because I'm not running databases, video encoding, etc. The benefits of hugepages are pretty app-specific. Linus saw the benefit specifically in a video he did that featured a lot of video encoding, and there was a pretty dramatic impact on overall performance as a result.

    Then your VM will give you an error upon trying to start it.

     

    What is important to remember is that to change your allocation for hugepages, you need to adjust the lines in SysLinux and reboot.  There are technically methods that allow you to do this while the machine is running, but I highly advise against that as you may not get contiguous memory allocation then.

    Thank you for the great information! I thought it was something deeper, like a kernel option set at compile time; I didn't realize it was already enabled right out of the box!

     

    Is there any specific documentation about hugepages and VM tuning? Is it specific to OVMF or SeaBIOS, etc.?
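
    For anyone following along, my understanding of the SysLinux change is roughly the following; the page size and count are just an example sized for a single 16GB VM, and I haven't tested this myself yet:

        # /boot/syslinux/syslinux.cfg - reserve 16 x 1GiB hugepages at boot
        label Unraid OS
          menu default
          kernel /bzimage
          append hugepagesz=1G hugepages=16 initrd=/bzroot

        # after rebooting, confirm the pool was actually allocated
        grep Huge /proc/meminfo

    and then the VM's XML would need a memoryBacking block so the guest memory actually lands on those pages:

        <memoryBacking>
          <hugepages/>
        </memoryBacking>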
