unrateable

Members
  • Content Count: 70
  • Joined
  • Last visited

Community Reputation: 4 Neutral

About unrateable
  • Rank: Advanced Member

Converted
  • Gender: Undisclosed

  1. is data on array paths that are mounted inside docker containers in danger?
  2. in order to move the VM you need to move vdisk1.img to the drive you want it on, then point the VM settings to it. Do I understand correctly that you have the vdisk on the array? It is recommended to host the vdisk on a cache drive or an unassigned drive outside the unraid array, which is usually better suited for I/O-heavy VM use (a quick sketch of the move is at the end of this list).
  3. I'd use a simple script like that for exactly your example.
  4. you may want to take unraid off the internet asap, and instead run a reverse proxy docker and restrict the ports; or an openvpn docker - that is the easier way, I guess.
  5. you must also log in to the AirVPN website and select a port
  6. did you set up a forwarded port via the PIA website and set it accordingly inside the docker container yet? my bad, I confused PIA with the way AirVPN is set up
  7. ignoring BIOS-level infections... I am wondering: how would a rootkit survive partition erasure and a reformat?
  8. unraid takes care of the split levels and follows the rules you set, as long as you move between shares. never move between disk(s) and shares - it may break things. there is no single best way: the Krusader docker, mc via CLI, a root share, or copying with a VM as intermediary are all popular ways... (see the second sketch at the end of this list)
  9. ..couldn't resist and updated the BIOS + swapped the NVMe sockets on the asus hyper-x adaptor card a few minutes ago. now I am using the two sockets next to each other and in numerical order. wohoooo, the motherboard seems to like it; unraid reports the two NVMe drives in two separate IOMMU groups. clean and sweet. 😁 I conclude the reason for my issue was that the asus card's signal routing wasn't connecting the two NVMe SSDs to the mobo PCIe slot (electrical x8 / mechanical x16) on the first 8 consecutive lanes.
  10. sorry, my bad, I am using this one.... I used auto and x4x4 but only one NVMe shows up. the adaptor has room for 4 M.2s. I have the 970 in the second and the WD Black in the first. maybe two of the 4 won't work since the PCIe slot is x8 max... I'll try to swap (the WD) this upcoming weekend....
  11. I was thinking my board can do 2 slots bifurcated, however that still doesn't explain why SLOT1 doesn't work, unless I also need to activate it for SLOT6 at the same time..... could activating SR-IOV make a difference? ...hm
  12. I have the asus hyper-x NVMe adaptor in the physical x16 (electrical x8) SLOT1 of my board; it supports bifurcation. both SSDs are NVMe x4, latest gen. tried with the auto and x4x4 BIOS settings in legacy OPROM mode, but unraid only recognizes one NVMe (the 970) and not the 2nd (the WD Black). I wonder if I need to switch to EFI OPROM mode (what's that for anyway?) or use SLOT6 instead. unfortunately SLOT6 holds my double-slot GPU, which probably won't fit in SLOT1, and I wouldn't be able to run the GPU at x16 (although that's not that important, since the difference between x8 and x16 for a GPU can barely be noticed). ideas? ps: I hate that the tinkering messes up my vm xmls after each reboot, but one's got to pay the price... 😅
  13. any update on whether this works for you?
  14. unrateable

    Hardware

    I think your CPU doesn't have enough threads: 2 for unraid and 2 for docker leaves not much room for VMs (the thread budget is spelled out in the last sketch below). if you can afford it, I suggest at least something like an Intel Xeon E-2134; or do a Ryzen build; or maybe use cheaper last-gen hardware with even more cores?
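
A minimal Python sketch of the vdisk move from item 2. The paths are made-up examples - adjust them to your own shares/pools - and stop the VM before moving anything; afterwards point the VM's vdisk location at the new path in the VM settings.

    import shutil
    from pathlib import Path

    # hypothetical example paths - adjust to your own shares / pools
    src = Path("/mnt/user/domains/win10/vdisk1.img")   # vdisk currently sitting on the array
    dst = Path("/mnt/cache/domains/win10/vdisk1.img")  # target on the cache (or an unassigned) drive

    dst.parent.mkdir(parents=True, exist_ok=True)  # create the target folder if it doesn't exist
    shutil.move(str(src), str(dst))                # copies across filesystems, then removes the source

    print(f"now point the VM's vdisk location at: {dst}")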
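A sketch of the share-to-share move from item 8, again in Python with made-up share names. The point is that both source and destination are /mnt/user paths, so unraid's split level and allocation rules still decide where the file physically lands.

    import shutil

    # hypothetical share names - both are user-share (/mnt/user) paths
    src = "/mnt/user/downloads/movie.mkv"
    dst = "/mnt/user/media/movies/movie.mkv"

    shutil.move(src, dst)

    # do NOT mix a disk path and a share path for the same file, e.g.
    #   shutil.move("/mnt/disk1/downloads/movie.mkv", "/mnt/user/media/movies/movie.mkv")
    # the user share is a view over the disks, so this may end up copying a file onto itself and breaking it.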
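And the thread budget from item 14 as a quick worked example. The 2+2 reservation is just the rough split from the post, and the listed CPUs are illustrations (the E-2134 being a 4-core/8-thread part).

    # rough reservation from the post: 2 threads for unraid, 2 for docker
    reserved = 2 + 2

    cpus = {
        "current 4-thread CPU (example)": 4,
        "Intel Xeon E-2134 (4c/8t)": 8,
    }

    for name, threads in cpus.items():
        print(f"{name}: {threads - reserved} thread(s) left for VMs")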