Everything posted by uldise

  1. According to the motherboard manual, LAN2 is the correct port for IPMI, and it's a shared port. Are you using that port?
  2. Yes, I'm running other VMs on ESXi too, and I started that way a long time ago, when unRAID had no KVM support. And why break things that work? IMHO, ESXi is a very stable system; my ESXi host's uptime is several months, and I don't need to restart the host or the other VMs when I restart unRAID. So this configuration gives me more flexibility.
  3. If I look at my unRAID server under Settings/VM Manager, I get: "Your hardware does not have Intel VT-x or AMD-V capability. This is required to create VMs in KVM." That's what happens with nested virtualization; I haven't researched it further because I don't need it. No issues at all otherwise. Look at my sig for detailed info on my hardware and unRAID VMs.
  4. I've been running my two unRAID servers on ESXi for years, BUT this configuration is not supported. And I run all my other VMs on ESXi too, so there's no nested virtualization. I agree with testdasi: nested virtualization is not a good idea.
  5. Keep in mind that you typically lose half of the available PCIe lanes too. It depends on your motherboard, so read the manual; I don't know how many cards you have in your system.
  6. Choose VMware ESXi then. I have no experience with GPU passthrough, but other than that everything works just fine; I have two unRAID VMs, each with its own HBA passed through.
  7. No worries about that. I try to help everyone if I can. And sorry for my English; it's not my native language.
  8. The Host path is correct, but the Container path is not. Set the container path to, for example, /data2, and then, in the second part of the previous instruction, add Local storage and choose /data2 as the Location.
  9. You just want to attach a local unRAID folder to Nextcloud? Then first, map your unRAID folder to a docker path in your Nextcloud docker settings. Second, use "Local" storage and point it to the mapped path inside the docker container.
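  As a concrete sketch of that two-step mapping: the share name /mnt/user/media and the container path /data2 below are placeholders for illustration, not names from the original posts. The unRAID docker UI's "add path" does the same thing as the -v flag here, so the command is only printed, not executed:

  ```shell
  #!/bin/sh
  # Hypothetical example -- share name, port, and container path are placeholders.
  HOST_PATH=/mnt/user/media     # unRAID share on the host
  CONTAINER_PATH=/data2         # path Nextcloud will see inside the container

  # In the unRAID docker UI this is: Host path = $HOST_PATH, Container path = $CONTAINER_PATH.
  # Print the equivalent docker command rather than running it:
  echo docker run -d --name nextcloud -p 8080:80 \
    -v "$HOST_PATH:$CONTAINER_PATH" nextcloud
  ```

  Afterwards, in Nextcloud go to External storage, add "Local", and set the Location to /data2 (the container-side path, not the host path).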
  10. While that may be true, IPsec comes with some downsides IMHO. The first is that you need to open specific ports (1701, 500, 4500) to get it to work, and these ports may be blocked on the client side. I have IPsec configured at my house, and while I can access it from one office, I can't from another. I've had good results with OpenVPN on top of pfSense: it works just fine on UDP port 443, so clients have no trouble connecting. And I can issue a specific configuration for every user, with their own certificates. I'm also using 2FA with a RADIUS server inside pfSense, so users have to log in with Google Authenticator, for example.
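  For reference, the relevant directives of an OpenVPN server config for the UDP-443 setup described above might look like this. This is a minimal sketch, not a complete working config; pfSense generates the full config from its GUI:

  ```
  # Fragment of an OpenVPN server config (illustrative only)
  proto udp
  port 443          # UDP 443 is rarely blocked on client networks
  dev tun
  # Per-user certificates: each client gets its own cert/key pair.
  # 2FA via RADIUS is configured on the pfSense side, not in this file.
  ```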
  11. Did you read the first post of this thread?
  12. Since the 4011 has IPsec hardware acceleration support, I recommend choosing IPsec/L2TP, but you can choose other types too. For a complete IPsec/L2TP server how-to, just google it; one of the first hits: https://blog.johannfenech.com/?p=1385
  13. You can set up this docker and test your disks again: https://forums.unraid.net/topic/70636-beta-6a-diskspeed-hard-drive-benchmarking-unraid-6/ You can test several disks at once to see how your controller is performing.
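  If you'd rather do a quick manual check before setting up that docker, a rough sequential-speed test with dd looks like this. A sketch only: TESTDIR is a placeholder, and the DiskSpeed docker above gives much better per-controller numbers:

  ```shell
  #!/bin/sh
  # Rough sequential write/read test for one disk. TESTDIR is a placeholder --
  # point it at a mount on the disk you want to measure (e.g. /mnt/disk1).
  TESTDIR="${TESTDIR:-/tmp}"
  F="$TESTDIR/speedtest.bin"

  # Write 100 MB, forcing it to disk so the page cache doesn't inflate the number
  dd if=/dev/zero of="$F" bs=1M count=100 conv=fsync 2>&1 | tail -n 1

  # Read it back (on a real disk, drop caches first for an honest figure:
  #   echo 3 > /proc/sys/vm/drop_caches)
  dd if="$F" of=/dev/null bs=1M 2>&1 | tail -n 1

  rm -f "$F"
  ```

  dd prints the throughput of each pass on its last status line; the write figure is usually the more interesting one for spotting a slow disk or an overloaded controller.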
  14. There are so many topics about this question, so I'll quote @johnnie.black: "Any LSI with a SAS2008/2308/3008 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc"
  15. It's a plain HBA, so there's no need to flash it; it should work out of the box.
  16. I don't know, and that's why I'm asking you to test. When I add more drives to a server, I always preclear them first; that procedure lets you evaluate drive performance too.
  17. OK, just for a test: can you assign that drive to another Linux VM and test it there?
  18. How are your drives attached to the host? And how does unRAID access them?
  19. I just hope this changes someday, so the unRAID host is stable enough on its own, without plugin maintainers needing to add additional packages.
  20. Why not go the docker route?
  21. Agreed. I have the non-WiFi version, and it works like a charm. I also use it with a 10Gbit passive DAC (the specs say that's not supported) to connect it to my main 10Gbit switch.