uldise

Everything posted by uldise

  1. if it's not in the container's file system, then you can't map it. i'm talking about simple volume mapping, for example you can map /mnt/user/appdata/container/container.log on the host side to /container.log inside the container.
  2. you can map a specific file instead of a folder too.. so no need to map the entire folder. it can be useful when the file sits at the root level inside the container.
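a minimal sketch of the single-file mapping idea (the paths and image name are just placeholders, not from any real setup):

```shell
# bind-mount one file from the host into the container instead of a whole folder.
# note: the host file must already exist before the run - if it doesn't,
# docker will create a *directory* with that name instead of a file.
docker run -d \
  -v /mnt/user/appdata/myapp/container.log:/container.log \
  myimage:latest
```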
  3. Container paths are wrong, see here: https://hub.docker.com/r/uldiseihenbergs/meshcentral
  4. then something is wrong with your volume mapping. all config files should be accessible from the host in the mapped folder.
  5. i will ask you again - is your network 1Gbit? you can run iperf3 on the ESXi host too, see here: https://williamlam.com/2016/03/quick-tip-iperf-now-available-on-esxi.html
  6. @1da is your network 1Gbe? if yes, then ~200Mbit is really poor speed.. when i was on ESXi, and now on Proxmox, i'm getting > 9Gbit on a 10Gbit network - all default settings with one virtual NIC of the VMXnet3 adapter type. so it looks like you need to resolve your network problems first...
  7. remove your NIC bonds entirely and set MTU back to default, then test with iperf3 to confirm the network is not the bottleneck..
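the iperf3 test itself is just two commands (the IP below is a placeholder for your own host):

```shell
# on one end (e.g. the unraid box), start the server:
iperf3 -s

# on the other end, run the client against it for 30 seconds:
iperf3 -c 192.168.1.10 -t 30
# a healthy 1Gbit link should report somewhere around 940 Mbits/sec
```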
  8. Thanks for that. i need a dual port one, and found one on serverschmiede.com
  9. i'm interested in this too, but from Europe
  10. i've been running unraid in VMs for years - some years under ESXi, then moved to Proxmox about a year ago. no problems with performance degradation at all..
  11. @Mr_Jay84 there is a typo in your port mappings - in the last row the container port should be 443, but you have 433
  12. @Mr_Jay84 seems like it's the first run, and you should access the WebUI at https://<yourLANIP>:466/
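just to illustrate the fix (the host-side port 466 is taken from the post above, the rest is a sketch):

```shell
# host port 466 forwarded to container port 443 (the HTTPS WebUI).
# a typo like 433 on the container side makes the WebUI unreachable,
# because nothing inside the container listens on 433.
docker run -d \
  -p 466:443 \
  uldiseihenbergs/meshcentral
```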
  13. for me it looks a bit overkill - such a system with 10 drives at full load should not consume more than 200W..
  14. huh, i just cross-flashed my Dell H200 to the latest LSI firmware on my Supermicro X8 server board - the card works as expected - i connected an HDD to test and it shows up as expected. then i moved the card to my desktop board - an older Asus H97-Plus - and the card is not visible at all - the heartbeat LED is not blinking, and the BIOS shows no card in this slot. i tried both x16 slots to no avail. i tried another desktop board - an ASRock Fatal1ty Z77 Professional-M - and the result is the same. i also tried the duct tape mod, covering pins 5 and 6 with electrical tape - didn't help either.. so i'm stuck
  15. there is a BIOS 1.5 for your board. maybe worth a try? make sure you have the latest BMC before you upgrade the BIOS..
  16. you have a newer generation MB, so i don't think it's the CMOS.. have you tried a BMC/BIOS update, if available?
  17. Do you have a backup server? if not, then build one. unraid parity is not a backup, and you should have a separate backup server for your most important data..
  18. if you wanna go with more than 24 drives, then look at Supermicro chassis. this one for example: https://www.supermicro.com/en/products/chassis/4U/847/SC847BE2C-R1K23LPB but it will be a long way from your current budget i think. you can find these or similar ones refurbished too. BUT, as others suggested, a very big array with just two parity drives is risky.. so just buy two separate servers and use two unraids. OR, if you go with one server, you can build a second unraid on top of unraid as a VM - but all this adds more complexity. BTW, i'm running two small unraid arrays o
  19. looks good. every company does its own cable coloring. in my experience red cables are usually the reverse ones. but if they clearly state it, then it should be fine.
  20. These red ones - aren't they reverse breakout?
  21. have you had any chance to use these cards with SR-IOV? i have a ConnectX-2 and am thinking of upgrading to get SR-IOV capabilities for my VMs.
  22. what memory exactly do you have? is it populated for both CPUs? are both EPS power connectors connected, so both CPUs can be used?
  23. regarding the UPS - just set up unraid as a UPS listener and it will shut itself down
  24. i have two unraid VMs running on top of Proxmox for about a year. but i have never tested your case. if i have to shutdown/restart the host, i always shut down my unraid VMs manually.. what is the case where you need that? power outages?
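for reference, a clean guest shutdown from the Proxmox host is one command (VM ID 100 is a placeholder):

```shell
# ask the guest to shut down cleanly - requires ACPI support
# or the qemu guest agent inside the VM
qm shutdown 100

# proxmox can also do this automatically on host shutdown if you set
# the "Start/Shutdown order" in the VM's options, instead of doing it manually
```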