
  1. Yes, AMD Epyc has 128 PCIe lanes per CPU, while Intel has 64. And I'm not talking about Platinum or any other high-end CPUs...
  2. I just noticed that these strange icons only appear in the Compact layout. If I turn it off, the images are fine.
  3. Same for me. Chrome browser on Ubuntu.
  4. That's it - roughly 20% of your CPU is that iowait. And this is not new - one of my Unraid servers is still on 6.5.3, and it's the same there... I agree with that. For example, my system has high iowait when NFS is in use; when transferring the same data over Samba, there are no problems. I'm not sure what causes this behavior...
  5. Run the top command and look for the CPU wait percentage (the "wa" value). htop won't display it. As far as I know, the Unraid dashboard includes CPU wait too.
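A quick non-interactive way to check iowait, as a sketch assuming a standard Linux host (the awk field index assumes the usual /proc/stat column layout):

```shell
# One-shot top snapshot: the "wa" value in the %Cpu(s) line is iowait.
top -bn1 | grep -i 'cpu(s)'

# Or read the raw kernel counters directly: the 5th value after "cpu"
# in /proc/stat is cumulative iowait time in jiffies
# (fields: user nice system idle iowait irq softirq ...).
awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat
```

The /proc/stat values are cumulative since boot, so tools like top compute the percentage from the delta between two samples.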
  6. How do you update the firmware? Do you need any special hardware to do that? I think I got the same expander as you from the link you posted in Great Deals, but I haven't had a chance to test it yet.
  7. Those are boards from two different generations... I recommend going for the H12 with a Rome or Milan CPU...
  8. If it's not in the container's file system, then you can't map it. I'm talking about simple volume mapping; for example, you can map /mnt/user/appdata/container/container.log on the host side to /container.log inside the container.
  9. You can map a specific file instead of a folder too, so there's no need to map the entire folder. This can be useful when the file is at the root level inside the container.
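The single-file mapping described above can be sketched as a compose fragment (the paths and image here are hypothetical examples, not from the original posts; note that the host-side file must exist before the container starts, otherwise Docker creates a directory with that name instead):

```yaml
# docker-compose sketch: bind-mount one file, not the whole folder.
# Create /mnt/user/appdata/mycontainer/mycontainer.log on the host first.
services:
  mycontainer:
    image: alpine          # hypothetical image
    volumes:
      - /mnt/user/appdata/mycontainer/mycontainer.log:/container.log
```

The same mapping on the docker command line would be a single `-v host_file:container_file` argument.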
  10. The container paths are wrong, see here: https://hub.docker.com/r/uldiseihenbergs/meshcentral
  11. Then something is wrong with your volume mapping. All config files should be accessible from the host in the mapped folder.
  12. I will ask you again - is your network 1Gbit? You can run iperf3 on the ESXi host too, see here: https://williamlam.com/2016/03/quick-tip-iperf-now-available-on-esxi.html
  13. @1da is your network 1GbE? If yes, then ~200Mbit is really poor speed... When I was on ESXi, and now on Proxmox, I get over 9Gbit on a 10Gbit network - all default settings with a single virtual NIC of the VMXnet3 adapter type. So it looks like you need to resolve your network problems first...
  14. Remove your NIC bonds entirely, set the MTU back to default, and test with iperf3 to confirm the network is not the bottleneck...
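A basic iperf3 throughput test between two machines looks like this (a sketch; the IP address is a placeholder for your own server's address):

```shell
# On the machine acting as the server (e.g. the Unraid/ESXi host):
iperf3 -s

# On the client (e.g. the VM or desktop), pointing at the server's IP:
iperf3 -c 192.168.1.10 -t 10    # 192.168.1.10 is a placeholder

# Add -R to test the reverse direction (server sends, client receives):
iperf3 -c 192.168.1.10 -t 10 -R
```

On a healthy 1GbE link you should see roughly 940 Mbit/s; results far below that point to a network problem rather than a storage one.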
  15. Thanks for that. I need a dual-port one, and found one on serverschmiede.com