uldise

Members
  • Posts: 956
  • Joined
  • Last visited
  • Days Won: 3

Everything posted by uldise

  1. i'm not sure i understand your question - if you add an LSI card, then you will see its IOMMU group number. if it is in its own group, then you can pass it through to the VM.
  2. that's true - with such a config, forget about SMART monitoring and disk spindown.. you must pass through the whole disk controller to get these features working. look at the IOMMU group of your controller - if there are many devices in the same group, you are in trouble. as per your pic, i see group "11" more than once. if you pass through one device from that group, then all the others in the same group become inaccessible to the host. there are some options to split PCIe devices into separate groups, but it depends on the motherboard used. see the first sketch after this list for a quick way to check your groups from the host.
  3. it depends.. if you have to restart unraid, then all your VMs/containers go down with it.. i'm running unraid on top of Proxmox, and it runs just fine - for me Proxmox offers more options because it's pure Debian. so VMs/LXC run on Proxmox, while unraid handles storage and Docker. if you run unraid as a VM, it's recommended to pass through all needed devices to that VM.. check whether your hardware supports this..
  4. simply try safe mode, then remove the incompatible plugin as requested..
  5. IMPORTANT: Activation Code redemptions cannot be processed until 11/30/2021 at 12:01 AM PST.
  6. it depends. take Noctua as an example: 120mm fans have more static pressure than 80mm ones.. here's a comparison between various types of fans: https://noctua.at/en/nf-a12x25-performance-comparison-to-nf-f12-and-nf-s12a
  7. Yes, AMD Epyc has 128 PCIe lanes per CPU, while Intel has 64. i'm not talking about Platinum or any other high-end CPUs..
  8. i just noticed that these strange icons appear only on the Compact layout. if i turn it off, then the images are just fine.
  9. the same for me. Chrome browser on Ubuntu.
  10. that's it - ~20% of your CPU is this iowait. and this is not new - one of my unraid servers is still on 6.5.3, and it's the same.. i agree with that. for example, my system has high iowait when NFS is in use; when transferring the same data over Samba, no problems. not sure what the reason for such behavior is..
  11. run the top command and look for the CPU wait percentage ("wa"). htop won't display it by default. as far as i know, the unraid dashboard includes CPU wait too. see the /proc/stat sketch after this list for where that number comes from.
  12. how do you update the firmware? do you need any special hardware to do that? i think i got the same expander as you from the link you posted in great deals, but i haven't had a chance to test it yet.
  13. those are boards from two different generations.. i recommend going for the H12 and a Rome or Milan CPU..
  14. if it's not in the container file system, then you can't map it. i'm talking about simple volume mapping - for example, you can map /mnt/user/appdata/container/container.log on the host side to /container.log inside the container.
  15. you can map a specific file instead of a folder too.. so there's no need to map the entire folder. this can be useful when the file sits at the root level inside the container. see the sketch after this list for an example.
  16. your container paths are wrong, see here: https://hub.docker.com/r/uldiseihenbergs/meshcentral
  17. then something is wrong with your volume mapping. all config files should be accessible from the host in the mapped folder.
  18. i will ask you again - is your network 1Gbit? and you can run iperf3 on the ESXi host too, see here: https://williamlam.com/2016/03/quick-tip-iperf-now-available-on-esxi.html
  19. @1da is your network 1GbE? if yes, then ~200Mbit is really crap speed.. when i was on ESXi, and now on Proxmox, i'm getting > 9Gbit on a 10Gbit network - all default settings, with one virtual NIC of the VMXNET3 adapter type. so it looks like you need to resolve your network problems first...
  20. remove your NIC bonds entirely and set the MTU back to default, then test with iperf3 to confirm the network is not a bottleneck.. see the iperf3 sketch after this list for a quick way to run it.
  21. Thanks for that. i need a dual-port one, and found one on serverschmiede.com
  22. i'm interested in this too, but from Europe
  23. i've been running unraid as a VM for years - some years under ESXi, then moved to Proxmox about a year ago. no problems with performance degradation at all..
  24. @Mr_Jay84 there is a typo in your port mappings - in the last row the container port should be 443, but you have 433
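
For posts 1-2 (IOMMU groups), here is a minimal sketch of how to list the groups from the host. It assumes a Linux host booted with the IOMMU enabled (intel_iommu=on or amd_iommu=on) and Python 3 available; it is illustrative only and just reads the standard /sys paths, nothing in it comes from the original posts.

    #!/usr/bin/env python3
    """List every IOMMU group and the PCI devices it contains."""
    from pathlib import Path

    groups_root = Path("/sys/kernel/iommu_groups")

    for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted(d.name for d in (group / "devices").iterdir()):
            # uevent lists the bound driver (if any) for each PCI address
            uevent = Path("/sys/bus/pci/devices", dev, "uevent").read_text()
            driver = next((line.split("=", 1)[1] for line in uevent.splitlines()
                           if line.startswith("DRIVER=")), "no driver")
            print(f"  {dev} ({driver})")

If your controller shares a group with other devices, the whole group effectively goes to the VM together; if it sits alone in its group, it is safe to pass through.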
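
For posts 10-11 (the iowait discussion), a minimal sketch of where the "wa" number in top comes from: it samples the aggregate cpu line of /proc/stat twice and reports the share of time spent in iowait between the two samples. The one-second interval is an arbitrary choice for illustration.

    #!/usr/bin/env python3
    """Print the CPU iowait percentage over a one-second window."""
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            # first line: cpu  user nice system idle iowait irq softirq steal ...
            return [int(x) for x in f.readline().split()[1:]]

    before = cpu_times()
    time.sleep(1)
    after = cpu_times()

    deltas = [b - a for a, b in zip(before, after)]
    iowait_pct = 100.0 * deltas[4] / sum(deltas)   # 5th field is iowait
    print(f"iowait: {iowait_pct:.1f}%")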
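
For posts 14-15 (mapping a single file instead of a folder), a minimal sketch using the Docker SDK for Python (pip install docker). The image name and paths are placeholders, not taken from the original posts; on unraid you would set the same host path / container path pair in the container template instead of writing code.

    #!/usr/bin/env python3
    """Bind-mount one host file into a container and read it back."""
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "alpine:latest",
        command=["cat", "/container.log"],
        volumes={
            # host file on the left, path inside the container on the right
            "/mnt/user/appdata/container/container.log": {
                "bind": "/container.log",
                "mode": "ro",
            },
        },
        remove=True,
    )
    print(output.decode())

This works because Docker bind mounts do not care whether the source is a file or a directory, as long as the target inside the container is of the same type.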
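
For posts 18-20 (checking whether the network itself is the bottleneck), a minimal sketch that wraps the iperf3 client and reads its JSON summary. The server address is a placeholder; start iperf3 -s on the other machine (the ESXi/Proxmox host or the unraid VM) first, and the JSON field shown assumes a default TCP test.

    #!/usr/bin/env python3
    """Run a 10-second iperf3 test and print the received throughput."""
    import json
    import subprocess

    SERVER = "192.168.1.10"   # placeholder: machine running `iperf3 -s`

    result = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "10", "--json"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    gbit = report["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"received: {gbit:.2f} Gbit/s")

On a healthy 1Gbit link you should see somewhere around 0.94 Gbit/s; a result near 200Mbit points at the network setup rather than at unraid or the hypervisor.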