Tuftuf

Everything posted by Tuftuf

  1. Although I do agree with the suggestion to just stick it on the open internet, and it's not required to put it through a VPN. UPnP does not always work, but setting up a port forward to Plex is easy; just make sure to use Plex's port internally, as it's hardcoded. If you use a router or firewall to do the VPN encryption, your PC will not take a hit, and even then a decent PC can handle the VPN traffic of a few video streams. If you only use a VPN at the server end (e.g. AirVPN or another provider with port-forward support), the client does not need to decrypt anything; the traffic is only on the VPN leaving your house for the public internet. -- Note the only improvement here is that people cannot see your real IP when accessing Plex; other than that you still have a port forward that ends up on your Plex machine.
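     For illustration, this is roughly what that port forward looks like on a Linux-based router/firewall (most consumer routers do the same thing through their web UI). The WAN interface name eth0 and the internal address 192.168.1.50 are made-up placeholders; Plex's hardcoded internal port is TCP 32400.

        # Forward incoming WAN traffic on TCP 32400 to the Plex machine (placeholder IP).
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 32400 \
          -j DNAT --to-destination 192.168.1.50:32400
        # Allow the forwarded traffic through the firewall's FORWARD chain as well.
        iptables -A FORWARD -p tcp -d 192.168.1.50 --dport 32400 -j ACCEPT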
  2. I would be less concerned about Plex being on the open internet than about ports 80 and 443 directed at Unraid.
  3. One thing to be aware of: if you create a docker to run the OpenVPN service, you will need to add a network route back to the newly created VPN network via that new docker, otherwise your machines will not know how to find the new network. These routes can be added per machine or on your router.
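     For illustration, if the OpenVPN docker handed out addresses on 10.8.0.0/24 (an assumed tunnel subnet) and the host running that container sat at 192.168.1.10 (also assumed), the missing route would look like this:

        # On a LAN machine that needs to reach the VPN clients (Linux example).
        ip route add 10.8.0.0/24 via 192.168.1.10
        # The same idea on the router: a static route for 10.8.0.0/24 with
        # 192.168.1.10 as the gateway, so every LAN device inherits it.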
  4. Unraid uses KVM for VMs, so google running ESXi under KVM. There is an option needed for Unraid to do this, but it's on the forum somewhere; with it the VM can run as a hypervisor, so it is possible, though I can't comment on how well it will run. If it wasn't for the fibre channel card I would have suggested a tiny little i7 with 2c/4t as a silent ESXi box: low power usage, and you can build something small. I still think a mini server about the size of a small switch is the way to go. Otherwise you are back to that 4th disk for ESXi, a USB stick for boot and the fibre channel card, and I don't like that idea myself. You do need a mobo that can support 2 GPUs plus enough lanes left for the fibre channel card, assuming you are using all the onboard SATA ports for the SSDs.
  5. First, decide what you mean by "remote access to my Unraid server": does that mean SSH access to the machine, web access to view the UI and make changes, or access to shares or services you are running? --Let me be clear: I'm not suggesting you set up a port forward to SSH or to the Unraid GUI on your internet or VPN connection, but it's an example.-- Take SSH as an example: you can port forward on your real public IP and use a DDNS service to give yourself a static DNS name that updates to follow your IP. This is not really the most secure way, but it's an example. I use AirVPN and the same could be done there: I can select a port forward and that traffic arrives the same way as if it came to my real IP address. AirVPN includes a DDNS service, but you could use any online DDNS provider.
     To answer your question: really you should be setting up a VPN server, either on your router or potentially as a docker on Unraid. I don't really see a problem with opening a port on your public IP (not the VPN) to host a VPN service for yourself only; that's for you to decide, and using DDNS as described above will still work in this case. I thought PIA did support port forwarding, but I don't use them so can't really comment; either way I would run it on your public IP, not the VPN IP.
     EDIT - If running TeamViewer within a VM suits your needs, it's likely a good option. Or use OpenVPN: the only port forward will be to the OpenVPN server, and you'll then need connection details similar to what you have for PIA.
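     As a rough sketch of the "VPN server as a docker on Unraid" option: the image name, appdata path and DDNS URL below are illustrative assumptions rather than specific recommendations, and the image's own key/config initialisation steps are skipped.

        # Run an OpenVPN server container and expose a single UDP port for it;
        # the router then forwards that one port from the public IP.
        docker run -d --name=openvpn-server \
          --cap-add=NET_ADMIN \
          -p 1194:1194/udp \
          -v /mnt/user/appdata/openvpn-server:/etc/openvpn \
          kylemanna/openvpn

        # Keep a DDNS name pointed at the (changing) public IP via whatever
        # update URL your DDNS provider gives you (hypothetical endpoint below).
        curl 'https://ddns.example.com/update?hostname=myhome.example.com'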
  6. I thought you wanted to avoid having four disks in the system. Unraid will see all disks that are presented to it, but it will not use any disk unless you select it; there's no risk that it will randomly mount disks. That being said, it's then up to you to ensure you don't select the wrong disk in either ESXi or Unraid. EDIT - It will also save that config across reboots and inform you if there are any other disks not currently handled by Unraid; it's very clear which disks are its own.
     The main reason I suggested a separate system for ESXi is that Unraid's real purpose is as a NAS plus all the other fun stuff. Dual booting with Windows for gaming because of an issue with gaming within a VM - yes, I can understand that. Triple booting between Unraid, ESXi and Windows personally seems a waste to me, but that is only because I wouldn't want to take down my home media centre (which also runs many other tasks for me) so often, or be unable to access personal files which might still be useful at work, such as notes I've taken while running the ESXi system. I would hope you wouldn't be gaming while working! Plex Server, TVHeadend (also doing recordings of off-air TV for the family), Splunk actually taking some notes on what is happening with the things I'm running at home, downloads etc. etc. - I couldn't even dual boot with this, but due to Steam/VAC I would be forced to. That's apart from the fact I have a performance issue with VMs due to AMD etc.
     On top of Unraid and the other network services I'm running one gaming/Windows 10 VM 24/7. I have a second Windows VM which is used for Red Alert; it only has a FirePro, but yes, with a better GPU it would be another VM similar to the first. I've used a Shield STB to play from the first machine, but I had other things to get working; my plan is to get it running from the second machine when I replace the GPU for gaming downstairs, without touching my first Windows desktop/gaming VM. Yes, you can select which VM can see which GPU, sound card etc.; this is dependent on the mobo/CPU setup you have.
  7. Once you allow Unraid to see the physical disks you shouldn't use them with another OS. Technically they could be mounted in other systems, but not in the ways you wish: data drives could be mounted as XFS, but you cannot write to them or parity will be broken, and I would assume the cache drives, being btrfs, could also be mounted, but again you are looking for Windows- or ESXi-compatible solutions, so that doesn't work. If you split the drives using a RAID controller prior to booting and presented them as different drives to ESXi and Unraid, then maybe it would be close to what you are looking for, but I really don't see the purpose of building the system this way. Shouldn't you be thinking of using ESXi for the VMs and GPU passthrough? I've used ESXi and the rest that goes with it, but never tried GPU passthrough on it. Depending on your requirements for the ESXi lab, you might want to look into putting that on a separate, purpose-built little mini server, since I take it you want to keep the GPUs close to the gaming VM and the physical Windows 10 install.
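     To make the "read but don't write" point concrete, here is a minimal sketch of mounting one of the XFS data disks read-only on another Linux box (the device name is a placeholder). Windows and ESXi can't do this natively, and writing to the disk outside the array would invalidate parity.

        # Mount an Unraid XFS data disk read-only for inspection or recovery.
        mkdir -p /mnt/inspect
        mount -t xfs -o ro /dev/sdX1 /mnt/inspect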
  8. It does seem to be something that should just be there as an option. I'm already running Splunk as a docker on Unraid, and soon I'll have a second Unraid box after my MicroServer is retired from its pfSense duties.
  9. I can see you are trying to get a better understanding, but you are taking a big leap without understanding how some of it all works. You should really look into what Unraid is, how it installs and generally how it uses disks. You mentioned installing Unraid booting from dual SSDs in RAID 1; Unraid would normally boot from a USB stick, which either way is required for licensing, and I expect the ESXi installs pass the USB stick through, although I've not checked that. Generally Unraid has control of its own disks, but there are other options. You wouldn't pass through a RAID 1 array and then use it as a data disk; for that you may as well set up a Linux machine, create an NFS share and use KVM for some VMs.
     Doesn't Nvidia GameStream require you to have the GPU to stream the game in the first place? Are you asking whether this is possible without a GPU? You can import games that have already been installed into Steam; sometimes it doesn't work, so keep a backup of the data. Google is your friend here. Sharing between two installs I've not had working - one Steam install always thought it needed to reinstall. Might be possible, not sure. I've seen guides for installing OSX... The GPUs in a system do not have to match, though I'm sure I've seen some issues with matching cards in some systems.
  10. Which would indicate you have not read the thread since I have already made clear that performance is not great, and your comment seems to assume I'm suggesting the performance is near native.
  11. I kinda feel like this has been answered in this thread many times already.
  12. I understand where the difference is now. I was looking for a /config folder within binhex-rtorrentvpn instead of making the connection in my head that /config actually means /mnt/user/appdata/binhex-rtorrentvpn. I did use the default mapping of /config, but I was looking to access the folder via the Unraid system just to copy the ovpn file, and that path is, of course, the path I set to be linked to /config within the container.
  13. @binhex On your GitHub page for this docker it shows the following for the OpenVPN config: "Stop rtorrentvpn docker and copy the saved ovpn file to the /config/openvpn/ folder on the host". But the folder on the host is simply /openvpn; on my system it shows as /mnt/user/appdata/binhex-rtorrentvpn/openvpn. It seems to be working though, as far as VPN and port forwarding go; I'll test integration with other apps soon.
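      In practice, with the default /config mapping that means something like the following on the Unraid host (the .ovpn filename and its location on the flash drive are made-up examples):

         # Stop the container, drop the provider's ovpn file into the mapped
         # openvpn folder, then start it again so it picks the file up.
         docker stop binhex-rtorrentvpn
         cp /boot/myprovider.ovpn /mnt/user/appdata/binhex-rtorrentvpn/openvpn/
         docker start binhex-rtorrentvpn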
  14. @binhex If using this as intended with the docker bridge, should I be able to set any subnet as the LAN_NETWORK, or am I limited to the subnet that the bridge is running on? Is the WebUI port the only port Sonarr requires? If so, then it should have worked. I had it to the point where I could connect with the WebUI from a subnet other than the bridge's, but Sonarr refused to connect; I'm thinking this is a problem on my side, as Sonarr and my laptop were in the same subnet at that point. I actually have two VPN services: VLAN 300 is IPVanish, and I wanted to place the docker on that subnet and then use AirVPN within the docker for downloads (due to port forwarding), but I'm not so concerned about leaks as it'll only be able to go out via IPVanish. I'm about to set up a pfSense install and am trying to get things in place; not the best diagram, but it gives a good idea: https://www.gliffy.com/go/publish/12156089 It's a bit messy with both pfSense installs, but I don't want to break things until everything is ready.
  15. I've got this installed, but I've only been able to connect to it in bridge mode. Unraid is running on 172.19.1.219, and the default bridge option runs docker NAT on Unraid's interface. If I set LAN_NETWORK to 172.19.90.0/24 then I can't access the UI, and Radarr & Sonarr can't connect (fails instantly) - from the 90.x network. If I set LAN_NETWORK to 172.19.1.0/24 then I can access the UI, but Radarr & Sonarr can't connect (fails instantly) - from either the 1.x or 90.x network. I normally put these in a separate network, 172.19.1.90. When I set the docker to actually be on the 90.x network with its own IP address I can't connect at all, and this was the setup I wanted to use. Is this intended to be used with bridge only?
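      For reference, an abridged sketch of the container setup being discussed (a real deployment adds more ports and the VPN credentials; whether LAN_NETWORK can cover subnets other than the bridge's is exactly the open question above, so treat this as illustrative only):

         # Bridge-mode run of the rtorrentvpn container with a single LAN subnet allowed.
         docker run -d --name=binhex-rtorrentvpn \
           --cap-add=NET_ADMIN \
           -p 9080:9080 \
           -v /mnt/user/appdata/binhex-rtorrentvpn:/config \
           -e VPN_ENABLED=yes \
           -e LAN_NETWORK=172.19.1.0/24 \
           binhex/arch-rtorrentvpn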
  16. I understand it's meant to save the settings, so if you use the app and set it up and then boot into Unraid you'll be good to go. I also have a Windows VM that I can run the app in.
  17. The only downside of my Ryzen system at the moment is that gaming performance is not perfect within a VM: some games run OK with a slight stall every so often, and my other games that use the CPU more stall more often. I still class them as playable, but it is causing me problems. I do see a good FPS increase with the npt setting, but my VMs are worse in other ways (a known issue). Plus the c-state thing, but that workaround is acceptable for now.
  18. Is there much of a preference for running your own cloud software? Mainly looking to sync desktops and documents between my laptops.
  19. I'm rebuilding my network at home and finally thinking about placing devices such as cameras or Hive within a DMZ. I'll be getting FTTH in about 2 months and am slowly getting ready for that; it'll only be 250/30, but it's more than the DSL connection I'm on now! Any suggestions on what issues I face? It doesn't seem the best idea to place Unraid on all the VLANs, and I also recall reading that Unraid is intended only to be accessed via its management IP, with the VLANs for use by VMs and dockers.
      Plex will need access to the media on Unraid and needs to provide access to machines on the local network; this might make it difficult to place Plex in the DMZ. The CCTV cameras only need to write to a VM locally; that VM (Blue Iris) will need access to Unraid shares to write, but I also want to open it up to the internet and not allow that machine to access anything else. Not on the diagram, but similar for TVHeadend: it'll need local and internet access and will write recordings to Unraid shares. Phones, tablets and Kodi will be on the LAN but need access to Plex and TVHeadend; I might end up placing all access at home through a VPN. Work VMs will need internet access and access to an Unraid share, but not access to the rest.
      I'm trying to avoid all the routing being done by pfSense, but starting to look at this I think it might be the best option for the moment. Another option is to create VMs in each of the VLANs and only offer the shares required on that network, but this adds another layer on top of Unraid which I would like to avoid. Any comments about where people normally place Plex etc. and how they handle Unraid shares when they have multiple networks?
  20. Running it in a VM with very limited resources assigned to it: 2 vCores and 4GB RAM. Currently only recording 3 cameras direct-to-disc, but I also have another 8 at low resolution from a remote site that I often record without issue. Also running BI at that remote site on a Core 2 Duo with 8GB RAM recording 8 cameras. When not transcoding it will run on almost anything.
  21. You can pass through the Nvidia cards if you follow the guide to download the Nvidia ROM and then supply it within the VM's XML file. Also, your first x16 slot will most likely be the primary GPU slot; I'd be surprised if you could assign the primary GPU to an x1 slot. Anyway, passing through your primary GPU on Ryzen works if you follow the guides already on the forum, so what you see as an issue is not actually an issue.
  22. The current limitations have not reduced how useful my new system is, and I'm more than happy with the current status. The system is stable when I'm traveling, and that's the main part for me.
  23. For the cache pool the default is RAID 1, as already mentioned, but it is possible to change this to RAID 0 if really needed; you would need 10Gb Ethernet to take advantage of such a setup, and generally the standard configuration is good enough.
      The term "cache drive" refers to the Cache drive option within Unraid. This is generally a faster drive that sits outside the protected array, and per share you can decide whether it uses the array, the cache only, or a combination of both (this includes a mover function that moves content to the array during the night). There are slightly more options, but this is a high-level description. You also have the option of unassigned drives; this is useful if you want to use the cache + mover function for general writes to the array (in this mode data is first written to the cache and later moved to the array) and use an unassigned drive to provide an SSD only for a certain project or projects. That drive is outside the array, though, and it will be up to you to move the data to the array when ready.
      Also, running Blue Iris in a Windows 10 VM uses very little resources; I use direct-to-disc writes and only give it 2 vCores. I'm sure it would like more, but I've had no issue so far.
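      For completeness, the RAID 1 to RAID 0 change on a btrfs cache pool is a balance operation. A minimal sketch, assuming the pool is mounted at /mnt/cache as on a stock Unraid system (back up the cache first, as RAID 0 gives no redundancy):

         # Show the current data/metadata profiles of the pool.
         btrfs filesystem df /mnt/cache
         # Convert data to RAID 0 across the pool members, keeping metadata at RAID 1.
         btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache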
  24. During general use or games I get audio dropouts: the sound just disappears and comes back 5-15 seconds later. I'm using a USB headset passed through to the VM. I did not have the audio issue on SeaBIOS before I moved over to OVMF and UEFI boot. I only recently added <hap/> to the config; I've had mixed results with it and might remove it. If I set npt=0 my FPS increases from 80 to 140, but the CPU keeps hitting 100% and causing pauses/slowdowns and FPS drops. With npt=1 I get around 80 FPS and it's stable, although in other games it's much less; this is based on Overwatch, which often gives high FPS compared to most games. The CPU also spikes when booting Windows (all cores top out at 100%) and when idle, and I've had a few boots where Windows doesn't finish booting while using npt=0. Normally my CPUs are assigned as 8,10,12,14,9,11,13,15, but it doesn't make a lot of difference to performance; 8-15 are isolated from Unraid. Open to suggestions, thanks. This is my XML at the moment:
      <domain type='kvm'>
        <name>Windows 10 One</name>
        <uuid>fd3728b7-e885-f11f-9e3d-910358b4b43c</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>8388608</memory>
        <currentMemory unit='KiB'>8388608</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>8</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='8'/>
          <vcpupin vcpu='1' cpuset='9'/>
          <vcpupin vcpu='2' cpuset='10'/>
          <vcpupin vcpu='3' cpuset='11'/>
          <vcpupin vcpu='4' cpuset='12'/>
          <vcpupin vcpu='5' cpuset='13'/>
          <vcpupin vcpu='6' cpuset='14'/>
          <vcpupin vcpu='7' cpuset='15'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/fd3728b7-e885-f11f-9e3d-910358b4b43c_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hap/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='none'/>
          </hyperv>
        </features>
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='8' threads='1'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/disks/WDC WDS500G1B0A/Windows 10 One/vdisk1.img'/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/disks/WDC WDS500G1B0A/VM_Cache/WindowsGaming.img'/>
            <target dev='hdd' bus='virtio'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/VM_Cache/14393.0.160715-1616.RS1_RELEASE_CLIENTENTERPRISE_S_EVAL_X64FRE_EN-GB.ISO'/>
            <target dev='hda' bus='ide'/>
            <readonly/>
            <boot order='2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/VM_Cache/virtio-win-0.1.126-2.iso'/>
            <target dev='hdb' bus='ide'/>
            <readonly/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pci-root'/>
          <controller type='ide' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:ea:5c:ab'/>
            <source bridge='br0'/>
            <model type='virtio'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target port='0'/>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/>
            </source>
            <rom file='/boot/GBWF670UEFI.rom'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x0c' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x047f'/>
              <product id='0x02f7'/>
            </source>
            <address type='usb' bus='0' port='1'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x1532'/>
              <product id='0x0016'/>
            </source>
            <address type='usb' bus='0' port='2'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x1b1c'/>
              <product id='0x0c09'/>
            </source>
            <address type='usb' bus='0' port='3'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x1c4f'/>
              <product id='0x0002'/>
            </source>
            <address type='usb' bus='0' port='4'/>
          </hostdev>
          <memballoon model='virtio'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
          </memballoon>
        </devices>
      </domain>
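      For anyone wanting to reproduce the npt=0 comparison above: on an AMD host that toggle is, as far as I know, the kvm_amd module parameter, so a rough sketch of flipping it (with all VMs shut down first) would be:

         # Reload the AMD KVM module with nested page tables disabled (revert with npt=1).
         modprobe -r kvm_amd
         modprobe kvm_amd npt=0
         # Confirm the active setting.
         cat /sys/module/kvm_amd/parameters/npt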